A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
Reliability-based optimization design of geosynthetic reinforced road embankment.
DOT National Transportation Integrated Search
2014-07-01
Road embankments are typically large earth structures whose construction requires large amounts of competent fill soil. To limit costs, the use of geosynthetics in road embankments allows for construction of steep slopes ...
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper shows the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model (modeling the dynamics), through modal analysis, to the control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
The State of Sensor Technology and Air Quality Monitoring
• Produces data of known value and highly reliable • Stationary - cannot be easily relocated • Instruments are often large and require a building to support their operation • Expensive to purchase and operate (typically > $20K each) • Requires frequent visits by highly trained staff to check on...
Observations from Laboratory and Field-based Evaluations of Select Low Cost Sensor Performance
• Produces data of known value and highly reliable • Stationary - cannot be easily relocated • Instruments are often large and require a building to support their operation • Expensive to purchase and operate (typically > $20K each) • Requires frequent visi...
HIGH VOLUME INJECTION FOR GCMS ANALYSIS OF PARTICULATE ORGANIC SPECIES IN AMBIENT AIR
Detection of organic species in ambient particulate matter typically requires large air sample volumes, frequently achieved by grouping samples into monthly composites. Decreasing the volume of air sample required would allow shorter collection times and more convenient sample c...
Portraiture lens concept in a mobile phone camera
NASA Astrophysics Data System (ADS)
Sheil, Conor J.; Goncharov, Alexander V.
2017-11-01
A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within smartphone casing. The current general requirement of mobile cameras having good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly-demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With a main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with double ray path; hence, field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.
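As a rough illustration of the geometrical relationship invoked here (the standard thin-lens depth-of-field approximation, not an equation quoted from the paper), for subject distance u well inside the hyperfocal distance, focal length f, f-number N = f/D, entrance-pupil diameter D and circle of confusion c,

    \mathrm{DoF} \approx \frac{2 u^{2} N c}{f^{2}} = \frac{2 u^{2} c}{f D},

so at fixed focal length and subject distance the depth of field scales inversely with the aperture diameter; a roughly fourfold larger pupil therefore cuts the depth of field by about a factor of four, consistent with the abstract.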
Imputation of unordered markers and the impact on genomic selection accuracy
USDA-ARS?s Scientific Manuscript database
Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...
Design and Modeling of a Variable Heat Rejection Radiator
NASA Technical Reports Server (NTRS)
Miller, Jennifer R.; Birur, Gajanana C.; Ganapathi, Gani B.; Sunada, Eric T.; Berisford, Daniel F.; Stephan, Ryan
2011-01-01
Variable Heat Rejection Radiator technology is needed for future NASA human-rated and robotic missions. The primary objective is to enable a single-loop architecture for human-rated missions: (1) radiators are typically sized for the maximum heat load in the warmest continuous environment, resulting in a large panel area; (2) a large radiator area results in the fluid being susceptible to freezing at low load in a cold environment, and typically results in a two-loop system; (3) a dual-loop architecture is approximately 18% heavier than a single-loop architecture (based on the Orion thermal control system mass); (4) a single-loop architecture requires adaptability to varying environments and heat loads.
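The sizing issue in point (1) can be sketched with the usual radiative heat-balance estimate (a generic relation, not the Orion design calculation): for maximum heat load Q_max, panel emissivity \varepsilon, radiating temperature T_rad and effective sink temperature T_sink,

    A = \frac{Q_{\max}}{\varepsilon \sigma \left(T_{\mathrm{rad}}^{4} - T_{\mathrm{sink}}^{4}\right)}.

Sizing A for the warmest continuous environment (where the temperature difference term is smallest) forces a large panel, and that same area then over-rejects heat at low load in a cold environment, which is why the working fluid becomes susceptible to freezing without variable heat rejection.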
ERIC Educational Resources Information Center
Kroopnick, Marc Howard
2010-01-01
When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…
Concurrent access to a virtual microscope using a web service oriented architecture
NASA Astrophysics Data System (ADS)
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
2013-11-01
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, mainly because of the very large sizes of VS, typically of the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with VS at the same time, so, owing to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service oriented architecture for streaming and visualizing very large images under scalable strategies, which in addition does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users adequate response times.
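As a rough sketch of the access pattern such an architecture must serve (the service URL and tile scheme below are hypothetical, not the authors' interface), a viewer requests only the JPEG2000 tiles covering its current viewport and resolution level, and many users can issue such requests concurrently:

    # Hypothetical sketch of concurrent tile requests against a streaming service.
    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    BASE = "http://example.org/vm"  # hypothetical virtual-microscopy endpoint

    def fetch_tile(slide_id, level, col, row):
        # fetch one tile of a virtual slide at a given resolution level
        url = f"{BASE}/{slide_id}/tiles?level={level}&col={col}&row={row}"
        with urllib.request.urlopen(url) as resp:
            return resp.read()  # compressed tile bytes

    def fetch_viewport(slide_id, level, cols, rows):
        # request only the tiles covering the viewport, in parallel
        with ThreadPoolExecutor(max_workers=8) as pool:
            jobs = [pool.submit(fetch_tile, slide_id, level, c, r)
                    for c in cols for r in rows]
            return [j.result() for j in jobs]

    # e.g. a 4 x 3 window of tiles at resolution level 2
    tiles = fetch_viewport("slide-001", level=2, cols=range(4), rows=range(3))

Serving small, independently decodable tiles rather than whole gigabyte slides is what keeps the per-user and per-request resource cost low.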
Spatial coding of object typical size: evidence for a SNARC-like effect.
Sellaro, Roberta; Treccani, Barbara; Job, Remo; Cubelli, Roberto
2015-11-01
The present study aimed to assess whether the representation of the typical size of objects can interact with response position codes in two-choice bimanual tasks, and give rise to a SNARC-like effect (faster responses when the representation of the typical size of the object to which the target stimulus refers corresponds to response side). Participants performed either a magnitude comparison task (in which they were required to judge whether the target was smaller or larger than a reference stimulus; Experiment 1) or a semantic decision task (in which they had to classify the target as belonging to either the category of living or non-living entities; Experiment 2). Target stimuli were pictures or written words referring to either typically large and small animals or inanimate objects. In both tasks, participants responded by pressing a left- or right-side button. Results showed that, regardless of the to-be-performed task (magnitude comparison or semantic decision) and stimulus format (picture or word), left responses were faster when the target represented typically small-sized entities, whereas right responses were faster for typically large-sized entities. These results provide evidence that the information about the typical size of objects is activated even if it is not requested by the task, and are consistent with the idea that objects' typical size is automatically spatially coded, as has been proposed to occur for number magnitudes. In this representation, small objects would be on the left and large objects would be on the right. Alternative interpretations of these results are also discussed.
Evolution of user analysis on the grid in ATLAS
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.; ATLAS Collaboration
2017-10-01
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
Estimating air-drying times of small-diameter ponderosa pine and Douglas-fir logs
William T. Simpson; Xiping Wang
2004-01-01
One potential use for small-diameter ponderosa pine and Douglas-fir timber is in log form. Many potential uses of logs require some degree of drying. Even though these diameters are considered small in the forestry context, they are large compared with typical lumber thickness dimensions. These logs, however, may require uneconomically long kiln-drying...
Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.
Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X
2015-01-01
Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
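A minimal sketch of the comparison idea, time-budgeted learning followed by evaluation on an unseen test set, is shown below with generic scikit-learn incremental learners standing in for the techniques actually evaluated in the paper (which are not reproduced here):

    # Sketch: compare classifiers by how much of a large learning set they can
    # consume within a fixed wall-clock budget, then score them on held-out data.
    import time
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=50000, n_features=20, random_state=0)
    X_learn, X_test, y_learn, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    classes = np.unique(y_learn)

    def train_within_budget(model, budget_s=2.0, batch=1000):
        # feed mini-batches until the time budget is exhausted; return samples used
        start, used = time.perf_counter(), 0
        for i in range(0, len(X_learn), batch):
            if time.perf_counter() - start > budget_s:
                break
            model.partial_fit(X_learn[i:i + batch], y_learn[i:i + batch], classes=classes)
            used += len(X_learn[i:i + batch])
        return used

    for model in (SGDClassifier(random_state=0), GaussianNB()):
        n = train_within_budget(model)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(type(model).__name__, "samples used:", n, "test accuracy: %.3f" % acc)

A faster learner sees more of the learning set within the budget, but as the abstract notes, this only pays off when the extra samples translate into better accuracy on the unseen test set.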
Improved Edge Performance in MRF
NASA Technical Reports Server (NTRS)
Shorey, Aric; Jones, Andrew; Durnas, Paul; Tricard, Marc
2004-01-01
The fabrication of large segmented optics requires a polishing process that can correct the figure of a surface to within a short distance from its edges, typically a few millimeters. The work here is to develop QED's Magnetorheological Finishing (MRF) precision polishing process to minimize residual edge effects.
NASA Astrophysics Data System (ADS)
Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe
2017-12-01
Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains at dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.
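For readers who want a concrete picture of the numerical approach, the following is a toy Metropolis sketch of an Ising model in quenched random fields (two dimensions and a small lattice purely for brevity; the study itself uses large-scale simulations with competing interactions and zero-temperature methods):

    # Toy Metropolis update for an Ising model with quenched Gaussian random fields.
    import numpy as np

    rng = np.random.default_rng(0)
    L, J, h_sigma, T, sweeps = 32, 1.0, 0.5, 2.0, 200
    spins = rng.choice([-1, 1], size=(L, L))
    fields = rng.normal(0.0, h_sigma, size=(L, L))  # quenched random fields

    def neighbour_sum(s, i, j):
        # nearest neighbours with periodic boundaries
        return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            # energy change for flipping spin (i, j); E = -J sum s_i s_j - sum h_i s_i
            dE = 2 * spins[i, j] * (J * neighbour_sum(spins, i, j) + fields[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    print("magnetization per spin:", spins.mean())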
The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.
Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert
2016-01-01
An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults.
Decline of phosphorus, copper, and zinc in anaerobic lagoon columns receiving pretreated influent
USDA-ARS?s Scientific Manuscript database
Confined swine production generates large volumes of wastewater typically stored and treated in anaerobic lagoons. These lagoons usually require a sludge management plan for their maintenance consisting of regular sludge removal by mechanical agitation and pumping followed by land application at agr...
Sludge reduction and water quality improvement in anaerobic lagoons through influent pre-treatment
USDA-ARS?s Scientific Manuscript database
Confined swine production generates large volumes of wastewater typically stored and treated in anaerobic lagoons. These lagoons may require cleanup and closure measures in the future. In practice, liquid and sludge need to be removed by pumping, usually at great expense of energy, and land applied ...
The Goal-Based Scenario Builder: Experiences with Novice Instructional Designers.
ERIC Educational Resources Information Center
Bell, Benjamin; Korcuska, Michael
Creating educational software generally requires a great deal of computer expertise, and as a result, educators lacking such knowledge have largely been excluded from the design process. Recently, researchers have been designing tools for automating some aspects of building instructional applications. These tools typically aim for generality,…
A Design Rationale Capture Using REMAP/MM
1994-06-01
company-wide down-sizing, the power company has determined that an automated service order processing system is the most economical solution. This new...service order processing system for a large power company can easily be... A system of this complexity would typically require three to five years
Airloads on Bluff Bodies, with Application to the Rotor-Induced Downloads on Tilt-Rotor Aircraft.
1983-09-01
...interference aerodynamics would be... on hover performance (Ref. 11)... to study the two-dimensional section characteristics of a wing in the wake of a...resources for large numbers of vortices; a typical case requires 10-15 min CPU time on the Ames Cray 1S computer. Figure 6 shows a typical result. Here...CPU time per case on a Prime 550 computer to converge to a steady solution; this would be equivalent to one or two seconds on
A Computational framework for telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.
1998-07-01
Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low level mechanisms to enable a large scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements for telemedicine.
Big questions, big science: meeting the challenges of global ecology.
Schimel, David; Keller, Michael
2015-04-01
Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets than can be collected by a single investigator's or a group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies experience many external and internal pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.
Optimized MCT IR-modules for high-performance imaging applications
NASA Astrophysics Data System (ADS)
Breiter, R.; Eich, D.; Figgemeier, H.; Lutz, H.; Wendler, J.; Rühlich, I.; Rutzinger, S.; Schallenberg, T.
2014-06-01
In today's typical military operations, situational awareness is a key element for mission success. In contrast to what is known from conventional warfare with typical targets such as tanks, asymmetric scenarios now dominate military operations. These scenarios require improved identification capabilities, for example the assessment of threat levels posed by personnel targets. Also, it is vital to identify and reliably distinguish between combatants, non-combatants and friendly forces. To satisfy these requirements, high-definition (HD) large format systems are well suited due to their high spatial and thermal resolution combined with high contrast. Typical applications are sights for long-range surveillance, targeting and reconnaissance platforms as well as rotorcraft pilotage sight systems. In 2012 AIM presented first prototypes of large format detectors with 1280 × 1024 elements in a 15 μm pitch for both the MWIR and LWIR spectral bands. The modular design allows integration of different cooler types, like AIM's split linear coolers SX095 or SX040 or rotary integral types, depending on what fits the application best. Large format FPAs have been fabricated using liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE) grown MCT. To offer high resolution in a more compact configuration, AIM started the development of a 1024 × 768, 10 μm pitch IR module. Electro-optical performance is maintained through a higher specific charge handling capacity of the readout integrated circuit (ROIC) in a 0.18 μm Si CMOS technology. The FPA size fits a dewar cooler configuration used for 640 × 512, 15 μm pitch modules.
Gender Differences in Reactions to College Course Requirements or "Why Females Are Better Students"
ERIC Educational Resources Information Center
Zusman, Marty; Knox, David; Lieberman, Michelle
2005-01-01
Two-hundred-and-seventy-eight undergraduates at a large southeastern university completed a confidential anonymous forty-item questionnaire designed to assess student reactions to course expectations and the degree to which they engage in behaviors typically associated with positive academic outcomes. Women were significantly more likely to be in…
Area requirements and landscape-level factors influencing shrubland birds
H. Patrick Roberts; David I. King
2017-01-01
Declines in populations of birds that breed in disturbance-dependent early-successional forest have largely been ascribed to habitat loss. Clearcutting is an efficient and effective means for creating early-successional vegetation; however, negative public perceptions of clearcutting and the small parcel size typical of private forested land in much of the eastern...
Treatment of Ion-Atom Collisions Using a Partial-Wave Expansion of the Projectile Wavefunction
ERIC Educational Resources Information Center
Wong, T. G.; Foster, M.; Colgan, J.; Madison, D. H.
2009-01-01
We present calculations of ion-atom collisions using a partial-wave expansion of the projectile wavefunction. Most calculations of ion-atom collisions have typically used classical or plane-wave approximations for the projectile wavefunction, since partial-wave expansions are expected to require prohibitively large numbers of terms to converge…
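For context, the expansion in question is the standard partial-wave decomposition of the projectile plane wave (a textbook relation, not a result of the paper),

    e^{i k z} = \sum_{l=0}^{\infty} (2l + 1)\, i^{l}\, j_{l}(kr)\, P_{l}(\cos\theta),

and in practice the sum must be truncated near l_max ≈ k a for impact parameters up to a. Because the projectile wave number k is very large for a heavy, fast ion, the number of required terms has traditionally been expected to be prohibitive.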
Team-Based Learning Exercise Efficiently Teaches Brief Intervention Skills to Medicine Residents
ERIC Educational Resources Information Center
Wamsley, Maria A.; Julian, Katherine A.; O'Sullivan, Patricia; McCance-Katz, Elinore F.; Batki, Steven L.; Satre, Derek D.; Satterfield, Jason
2013-01-01
Background: Evaluations of substance use screening and brief intervention (SBI) curricula typically focus on learner attitudes and knowledge, although effects on clinical skills are of greater interest and utility. Moreover, these curricula often require large amounts of training time and teaching resources. This study examined whether a 3-hour…
Work Identities in Comparative Perspectives: The Role of National and Sectoral Context Variables
ERIC Educational Resources Information Center
Kirpal, Simone
2006-01-01
New normative ideas about flexibility, employability and lifelong learning are shifting labour market requirements as they induce flexible employment patterns and new skilling needs. While the model of a typical progressive career based on possession of a particular set of (occupational) skills has been largely undermined, employees are…
USDA-ARS?s Scientific Manuscript database
Potato breeding cycles typically last 6-7 years because of the modest seed multiplication rate and large number of traits required of new varieties. Genomic selection has the potential to increase genetic gain per unit of time, through higher accuracy and/or a shorter cycle. Both possibilities were ...
NASA Astrophysics Data System (ADS)
Kallinikos, N.; Isliker, H.; Vlahos, L.; Meletlidou, E.
2014-06-01
An analytical description of magnetic islands is presented for the typical case of a single perturbation mode introduced to tokamak plasma equilibrium in the large aspect ratio approximation. Following the Hamiltonian structure directly in terms of toroidal coordinates, the well known integrability of this system is exploited, laying out a precise and practical way for determining the island topology features, as required in various applications, through an analytical and exact flux surface label.
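For background, the single-mode island structure referred to here is usually captured by the standard pendulum reduction near the resonant (rational) surface (a generic relation, not the paper's exact flux-surface label): with helical angle \xi = m\theta - n\varphi and perturbation amplitude \epsilon_{mn},

    H(\psi, \xi) \approx \tfrac{1}{2} H''\,(\psi - \psi_{s})^{2} + \epsilon_{mn} \cos\xi,

which gives an island half-width \Delta\psi \approx 2\sqrt{\epsilon_{mn}/|H''|} in the flux-like variable \psi.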
The fall of the black hole firewall: natural nonmaximal entanglement for the Page curve
NASA Astrophysics Data System (ADS)
Hotta, Masahiro; Sugita, Ayumu
2015-12-01
The black hole firewall conjecture is based on the Page curve hypothesis, which claims that entanglement between a black hole and its Hawking radiation is almost maximum. Adopting canonical typicality for nondegenerate systems with nonvanishing Hamiltonians, we show the entanglement becomes nonmaximal, and energetic singularities (firewalls) do not emerge for general systems. An evaporating old black hole must evolve in Gibbs states with exponentially small error probability after the Page time as long as the states are typical. This means that the ordinarily used microcanonical states are far from typical. The heat capacity computed from the Gibbs states should be nonnegative in general. However, the black hole heat capacity is actually negative due to the gravitational instability. Consequently the states are not typical until the last burst. This requires inevitable modification of the Page curve, which is based on the typicality argument. For static thermal pure states of a large AdS black hole and its Hawking radiation, the entanglement entropy equals the thermal entropy of the smaller system.
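For orientation, the "almost maximum" entanglement underlying the Page curve is usually quoted through Page's random-state average (a standard result cited here for contrast, not derived in the paper): for a random pure state of a bipartite system with subspace dimensions d_A \le d_B,

    \langle S_{A} \rangle \simeq \ln d_{A} - \frac{d_{A}}{2 d_{B}},

i.e. S \approx \min(\ln d_{A}, \ln d_{B}) up to a small correction. The paper's point is that typicality with a nondegenerate, nonvanishing Hamiltonian instead selects Gibbs-like states with strictly nonmaximal entanglement, which modifies this curve.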
Wilson-Mendenhall, Christine D; Barrett, Lisa Feldman; Barsalou, Lawrence W
2015-01-01
The tremendous variability within categories of human emotional experience receives little empirical attention. We hypothesized that atypical instances of emotion categories (e.g. pleasant fear of thrill-seeking) would be processed less efficiently than typical instances of emotion categories (e.g. unpleasant fear of violent threat) in large-scale brain networks. During a novel fMRI paradigm, participants immersed themselves in scenarios designed to induce atypical and typical experiences of fear, sadness or happiness (scenario immersion), and then focused on and rated the pleasant or unpleasant feeling that emerged (valence focus) in most trials. As predicted, reliably greater activity in the 'default mode' network (including medial prefrontal cortex and posterior cingulate) was observed for atypical (vs typical) emotional experiences during scenario immersion, suggesting atypical instances require greater conceptual processing to situate the socio-emotional experience. During valence focus, reliably greater activity was observed for atypical (vs typical) emotional experiences in the 'salience' network (including anterior insula and anterior cingulate), suggesting atypical instances place greater demands on integrating shifting body signals with the sensory and social context. Consistent with emerging psychological construction approaches to emotion, these findings demonstrate that it is important to study the variability within common categories of emotional experience.
Affinity+: Semi-Structured Brainstorming on Large Displays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burtner, Edwin R.; May, Richard A.; Scarberry, Randall E.
2013-04-27
Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ Concept was designed to improve the collaborative brainstorming process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to sort and group them manually on a large touch screen. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities.
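As an illustration of the kind of topic clustering such a tool could apply to captured ideas (generic TF-IDF plus k-means shown here, not the Affinity+ algorithms themselves):

    # Sketch: group short idea texts into themes with TF-IDF features and k-means.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    ideas = [
        "add a mobile app for field data entry",
        "offline mode for the mobile client",
        "streamline the login workflow",
        "single sign-on across internal tools",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(ideas)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for idea, label in zip(ideas, labels):
        print(label, idea)

Grouping suggestions like these can seed the affinity wall, while users remain free to re-sort the cards manually on the touch screen.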
USDA-ARS?s Scientific Manuscript database
The thermal environment in poultry housing is a primary influence on production efficiency and live performance. Heavy broilers (body weight > 3.2 kg) typically require high ventilation rates to maintain thermal comfort and production efficiency. However, large birds are observed to pant in mild to ...
Microcopying wildland maps for distribution and scanner digitizing
Elliot L Amidon; Joyce E. Dye
1976-01-01
Maps for wildland resource inventory and management purposes typically show vegetation types, soils, and other areal information. For field work, maps must be large-scale. For safekeeping and compact storage, however, they can be reduced onto film, ready to be enlarged on demand by office viewers. By meeting certain simple requirements, film images are potential input...
ERIC Educational Resources Information Center
Margulies, Barry J.; Ghent, Cynthia A.
2005-01-01
Medical Microbiology is a content-intensive course that requires a large time commitment from the students. Students are typically biology or prenursing majors, including students headed for professional schools, such as medical school and pharmacy school. This group is somewhat diverse in terms of background science coursework, so it can be…
Managing the Socioeconomic Impacts of Energy Development. A Guide for the Small Community.
ERIC Educational Resources Information Center
Armbrust, Roberta
Decisions concerning large-scale energy development projects near small communities or in predominantly rural areas are usually complex, requiring cooperation of all levels of government, as well as the general public and the private sector. It is unrealistic to expect the typical small community to develop capabilities to independently evaluate a…
Reduced power processor requirements for the 30-cm diameter HG ion thruster
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1979-01-01
The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.
On the energy budget in the current disruption region. [of geomagnetic tail
NASA Technical Reports Server (NTRS)
Hesse, Michael; Birn, Joachim
1993-01-01
This study investigates the energy budget in the current disruption region of the magnetotail, coincident with a pre-onset thin current sheet, around substorm onset time using published observational data and theoretical estimates. We find that the current disruption/dipolarization process typically requires energy inflow into the primary disruption region. The disruption dipolarization process is therefore endoenergetic, i.e., requires energy input to operate. Therefore we argue that some other simultaneously operating process, possibly a large scale magnetotail instability, is required to provide the necessary energy input into the current disruption region.
Tackling the challenges of fully immersive head-mounted AR devices
NASA Astrophysics Data System (ADS)
Singer, Wolfgang; Hillenbrand, Matthias; Münz, Holger
2017-11-01
The optical requirements of fully immersive head-mounted AR devices are inherently determined by the human visual system. The etendue of the visual system is large. As a consequence, the requirements for fully immersive head-mounted AR devices exceed those of almost any high-end optical system. Two promising solutions to achieve the large etendue, and their challenges, are discussed. Head-mounted augmented reality devices have been developed for decades - mostly for application within aircraft and in combination with a heavy and bulky helmet. The established head-up displays for applications within automotive vehicles typically utilize similar techniques. Recently, there has been the vision of eyeglasses with integrated augmentation, offering a large field of view and being unobtrusively all-day wearable. There seems to be no simple solution that reaches the functional performance requirements. Known technical solution paths seem to be dead ends, though some offer promising perspectives, albeit with severe limitations. As an alternative, unobtrusively all-day wearable devices with a significantly smaller field of view are already possible.
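The etendue argument can be made concrete with a back-of-the-envelope estimate (illustrative numbers only): taking the eye pupil alone as the aperture stop,

    G \approx A_{\mathrm{pupil}}\,\Omega_{\mathrm{FOV}}, \qquad \Omega_{\mathrm{FOV}} = 2\pi\bigl(1 - \cos\theta_{1/2}\bigr),

a 4 mm pupil (about 12.6 mm^2) combined with a +/-50 degree field (about 2.2 sr) already gives G on the order of 25-30 mm^2 sr, and a realistic eyebox larger than the pupil pushes the requirement higher still, beyond what conventional compact projection optics deliver.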
Micrometeoroid and Lunar Secondary Ejecta Flux Measurements: Comparison of Three Acoustic Systems
NASA Technical Reports Server (NTRS)
Corsaro, R. D.; Giovane, F.; Liou, Jer-Chyi; Burtchell, M.; Pisacane, V.; Lagakos, N.; Williams, E.; Stansbery, E.
2010-01-01
This report examines the inherent capability of three large-area acoustic sensor systems and their applicability to micrometeoroid (MM) and lunar secondary ejecta (SE) detection and characterization for future lunar exploration activities. Discussion is limited to instruments that can be fabricated and deployed with low resource requirements. Previously deployed impact detection probes typically have instrumented capture areas of less than 0.2 square meters. Since the particle flux decreases rapidly with increasing particle size, such small-area sensors rarely encounter particles in the size range above 50 microns, and even their sampling of the population above 10 microns is typically limited. Characterizing the sparse dust population in the size range above 50 microns requires a very large-area capture instrument. However, it is also important that such an instrument simultaneously measure the population of the smaller particles, so as to provide a complete instantaneous snapshot of the population. For lunar or planetary surface studies, the system constraints are significant. The instrument must be as large as possible to sample the population of the largest MM. This is needed to reliably assess the particle impact risks and to develop cost-effective shielding designs for habitats, astronauts, and critical instruments. The instrument should also have very high sensitivity to measure the flux of small and slow SE particles, as the SE environment is currently poorly characterized and poses a contamination risk to machinery and personnel involved in exploration. Deployment also requires that the instrument add very little additional mass to the spacecraft. Three acoustic systems are being explored for this application.
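The area argument is simple counting statistics (a generic estimate, not a number from the report): the expected number of detections over an observation time T is

    \langle N \rangle = F(>d)\, A\, T,

where F(>d) is the cumulative flux of particles larger than diameter d. Because F falls steeply with size, holding the expected count fixed while moving to larger particles requires a proportionally larger capture area A or duration T, which is why a 0.2 square meter sensor rarely registers impacts above 50 microns.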
Problems With Large Joints: Shoulder Conditions.
Campbell, Michael
2016-07-01
The shoulder is the most mobile joint in the body. It requires an extensive support system to create mobility while providing stability. Although there are many etiologies of shoulder pain, weakness, and instability, most injuries in the shoulder are due to overuse. Rotator cuff tears, labral tears, calcific tendinopathy, and impingement often result from chronic overuse injuries. Acute injuries include dislocations that can cause labral tears or other complications. Frozen shoulder refers to a typically benign condition of restricted range of motion that may spontaneously resolve but can cause prolonged pain and discomfort. The history combined with specific shoulder examination techniques can help family physicians successfully diagnose shoulder conditions. X-ray imaging typically is sufficient to rule out more serious etiologies when evaluating patients with shoulder conditions. However, imaging with magnetic resonance imaging (MRI) study or ultrasonography for rotator cuff tears, and MRI study with intra-articular contrast for labral tears, is needed to confirm these diagnoses. Corticosteroid injections and physical therapy are first-line treatments for most shoulder conditions. Surgical options typically are reserved for patients for whom conservative treatments are ineffective, and typically are performed arthroscopically.
Radius of Curvature Measurement of Large Optics Using Interferometry and Laser Tracker
NASA Technical Reports Server (NTRS)
Hagopian, John; Connelly, Joseph
2011-01-01
The determination of the radius of curvature (ROC) of an optic typically uses a phase-measuring interferometer on an adjustable stage to determine the positions of the ROC and of the optical surface under test; alternatively, a spherometer or a profilometer is used for this measurement. The difficulty of this approach is that, for large optics, translation of the interferometer or of the optic under test is problematic because of the distance of translation required and the mass of the optic. Profilometry and spherometry are alternative techniques that can work, but they require a profilometer or a measurement of subapertures of the optic. The proposed approach allows a measurement of the optic figure simultaneously with the full-aperture radius of curvature.
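For context, the spherometer alternative mentioned here rests on the classical sag relation (standard geometry, not the interferometer/laser-tracker method of this work): for a measurement ring of half-chord r (full chord D) and measured sag h,

    R = \frac{r^{2} + h^{2}}{2h} = \frac{D^{2}}{8h} + \frac{h}{2}.

For large, shallow optics the sag h is small, so small errors in h propagate into large errors in R (roughly as D^2/8h^2), which is one motivation for a full-aperture interferometric measurement instead.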
Suzanne M. Joy; R. M. Reich; Richard T. Reynolds
2003-01-01
Traditional land classification techniques for large areas that use Landsat Thematic Mapper (TM) imagery are typically limited to the fixed spatial resolution of the sensors (30m). However, the study of some ecological processes requires land cover classifications at finer spatial resolutions. We model forest vegetation types on the Kaibab National Forest (KNF) in...
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
NASA Astrophysics Data System (ADS)
Hill, Christopher T.
We discuss a class of dynamical models in which top condensation occurs at the weak scale, giving rise to the large top quark mass and other phenomena. This typically requires a color embedding, SU(3)_C → SU(3)_1 × SU(3)_2, ergo "Topcolor." Topcolor suggests a novel route to technicolor models in which sequential quarks condense under the Topcolor interaction to break electroweak symmetries.
Urban, Lorien E.; Weber, Judith L.; Heyman, Melvin B.; Schichtl, Rachel L.; Verstraete, Sofia; Lowery, Nina S.; Das, Sai Krupa; Schleicher, Molly M.; Rogers, Gail; Economos, Christina; Masters, William A.; Roberts, Susan B.
2017-01-01
Background: Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ~50% of US restaurants are individual or small-chain (non-chain) establishments that do not provide nutrition information. Objective: To measure the energy content of frequently ordered meals in non-chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. Design: A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non-chain restaurants, together with equivalent meals from large-chain restaurants. Setting: Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Main outcome measures: Meal energy content determined by bomb calorimetry. Statistical analysis performed: Regional and cuisine differences were assessed using a mixed model with restaurant nested within region × cuisine as the random factor. Paired t tests were used to evaluate differences between non-chain and chain meals, human energy requirements, and food database values. Results: Meals from non-chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non-chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Conclusions: Non-chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. PMID:26803805
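A minimal sketch of the paired comparison described in the statistical analysis (with made-up illustrative values, not the study's measurements):

    # Paired t test comparing matched non-chain and large-chain meal energies (kcal).
    from scipy.stats import ttest_rel

    non_chain = [1320, 980, 1510, 1105, 1260, 890]   # hypothetical non-chain meals
    chain = [1250, 1010, 1440, 1150, 1190, 930]      # matched chain equivalents

    t, p = ttest_rel(non_chain, chain)
    print("paired t = %.2f, p = %.3f" % (t, p))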
Flexibility of space structures makes design shaky
NASA Technical Reports Server (NTRS)
Hearth, D. P.; Boyer, W. J.
1985-01-01
An evaluation is made of the development status of high stiffness space structures suitable for orbital construction or deployment of large diameter reflector antennas, with attention to the control system capabilities required by prospective space structure system types. The very low structural frequencies typical of very large, radio frequency antenna structures would be especially difficult for a control system to counteract. Vibration control difficulties extend across the frequency spectrum, even to optical and IR reflector systems. Current research and development efforts are characterized with respect to goals and prospects for success.
Definition of large components assembled on-orbit and robot compatible mechanical joints
NASA Technical Reports Server (NTRS)
Williamsen, J.; Thomas, F.; Finckenor, J.; Spiegel, B.
1990-01-01
One of four major areas of project Pathfinder is in-space assembly and construction. The task of in-space assembly and construction is to develop the requirements and the technology needed to build elements in space. A 120-ft diameter tetrahedral aerobrake truss is identified as the focus element. A heavily loaded mechanical joint is designed to robotically assemble the defined aerobrake element. Also, typical large components such as habitation modules, storage tanks, etc., are defined, and attachment concepts of these components to the tetrahedral truss are developed.
Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2
NASA Technical Reports Server (NTRS)
Hayes, Jane; Dekhtyar, Alex
2004-01-01
There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large (order of millions - tens of millions and more documents) document collections and therefore, most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document and individual requirements considered as query input to IR methods are not necessarily independent from each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II; (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool; (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
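A generic sketch of the underlying IR step, scoring candidate links between high-level and low-level requirements by TF-IDF cosine similarity, is shown below (the requirement texts are invented for illustration, and the toolkit's actual methods, which also exploit analyst feedback, are not reproduced):

    # Sketch: candidate traceability links via TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    high_level = [
        "The system shall log all operator commands.",
        "The system shall encrypt telemetry data in transit.",
    ]
    low_level = [
        "Commands issued at the console are appended to the audit log.",
        "Telemetry packets are wrapped in TLS before downlink.",
        "The watchdog resets the processor after five missed heartbeats.",
    ]

    vec = TfidfVectorizer(stop_words="english").fit(high_level + low_level)
    sims = cosine_similarity(vec.transform(high_level), vec.transform(low_level))

    for i, req in enumerate(high_level):
        # links above a similarity threshold form the draft traceability matrix
        links = [j for j, s in enumerate(sims[i]) if s > 0.1]
        print(req, "->", links)

The analyst then vets these candidate links; as the abstract emphasizes, the final traceability matrix remains the analyst's decision.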
Heterogeneous Superconducting Low-Noise Sensing Coils
NASA Technical Reports Server (NTRS)
Hahn, Inseob; Penanen, Konstantin I.; Ho Eom, Byeong
2008-01-01
A heterogeneous material construction has been devised for sensing coils of superconducting quantum interference device (SQUID) magnetometers that are subject to a combination of requirements peculiar to some advanced applications, notably including low-field magnetic resonance imaging for medical diagnosis. The requirements in question are the following: The sensing coils must be large enough (in some cases having dimensions of as much as tens of centimeters) to afford adequate sensitivity; The sensing coils must be made electrically superconductive to eliminate Johnson noise (thermally induced noise proportional to electrical resistance); and Although the sensing coils must be cooled to below their superconducting- transition temperatures with sufficient cooling power to overcome moderate ambient radiative heat leakage, they must not be immersed in cryogenic liquid baths. For a given superconducting sensing coil, this combination of requirements can be satisfied by providing a sufficiently thermally conductive link between the coil and a cold source. However, the superconducting coil material is not suitable as such a link because electrically superconductive materials are typically poor thermal conductors. The heterogeneous material construction makes it possible to solve both the electrical- and thermal-conductivity problems. The basic idea is to construct the coil as a skeleton made of a highly thermally conductive material (typically, annealed copper), then coat the skeleton with an electrically superconductive alloy (typically, a lead-tin solder) [see figure]. In operation, the copper skeleton provides the required thermally conductive connection to the cold source, while the electrically superconductive coating material shields against Johnson noise that originates in the copper skeleton.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheib, J.; Pless, S.; Torcellini, P.
NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market-viable, world-class energy performance in the built environment.
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
...thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system...flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically
Space Weather Research at the National Science Foundation
NASA Astrophysics Data System (ADS)
Moretto, T.
2015-12-01
There is growing recognition that the space environment can have substantial, deleterious impacts on society. Consequently, research enabling specification and forecasting of hazardous space effects has become of great importance and urgency. This research requires studying the entire Sun-Earth system to understand the coupling of regions all the way from the source of disturbances in the solar atmosphere to the Earth's upper atmosphere. The traditional, region-based structure of research programs in Solar and Space physics is ill suited to fully support the change in research directions that the problem of space weather dictates. On the observational side, dense, distributed networks of observations are required to capture the full large-scale dynamics of the space environment. However, the cost of implementing these is typically prohibitive, especially for measurements in space. Thus, by necessity, the implementation of such new capabilities needs to build on creative and unconventional solutions. A particularly powerful idea is the utilization of new developments in data engineering and informatics research (big data). These new technologies make it possible to build systems that can collect and process huge amounts of noisy and inaccurate data and extract from them useful information. The shift in emphasis towards system level science for geospace also necessitates the development of large-scale and multi-scale models. The development of large-scale models capable of capturing the global dynamics of the Earth's space environment requires investment in research team efforts that go beyond what can typically be funded under the traditional grants programs. This calls for effective interdisciplinary collaboration and efficient leveraging of resources both nationally and internationally. This presentation will provide an overview of current and planned initiatives, programs, and activities at the National Science Foundation pertaining to space weather research.
But I'm an engineer—not a contracts lawyer!
NASA Astrophysics Data System (ADS)
Warner, Mark; Bass, Harvey
2012-09-01
Industrial partners, commercial vendors, and subsystem contractors play a large role in the design and construction of modern telescopes. Because many telescope projects carry relatively small staffs, engineers are often required to perform the additional functions of technical writing, cost estimating, and contract bidding and negotiating. The skills required to carry out these tasks are not normally taught in traditional engineering programs. As a result, engineers often learn to write Request for Proposals (RFPs), select vendors, and negotiate contracts by trial-and-error and/or by adapting previous project documents to match their own requirements. Typically, this means that at the end of a contract the engineer has a large list of do's, don'ts, and lessons learned for the next RFP he or she must generate. This paper will present one such engineer's experience writing and bidding proposal packages for large telescope components and subsystems. Included are: thoughts on structuring SOWs, Specs, ICDs, and other RFP documents; modern methods for bidding the work; and systematic means for selecting and negotiating with a contractor to arrive at the best value for the project.
Resolution requirements for aero-optical simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mani, Ali; Wang Meng; Moin, Parviz
2008-11-10
Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through turbulent wake of flow over a cylinder.
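The quantity at the center of this analysis, the optical path length accumulated by a beam through a refractive-index field, can be illustrated with a few lines of numerics. The sketch below integrates a synthetic index field along the propagation direction and reports the RMS optical path difference across the aperture; the Gaussian perturbation field and grid spacing are assumptions, not the paper's resolved LES fields or error estimator.

# Minimal sketch: optical path length through a sampled refractive-index field,
# OPL(x, y) = integral of n(x, y, z) dz, and the RMS optical path difference (OPD)
# across the aperture. The perturbation field below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 64, 64, 128
dz = 1.0e-3                                # m, grid spacing along the beam path (assumed)

n_field = 1.0 + 1.0e-6 * rng.standard_normal((nx, ny, nz))  # refractive-index samples

opl = n_field.sum(axis=2) * dz             # integrate along the propagation direction
opd = opl - opl.mean()                     # path differences relative to aperture mean
print(f"RMS OPD over the aperture: {opd.std():.3e} m")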
Optical fiber designs for beam shaping
NASA Astrophysics Data System (ADS)
Farley, Kevin; Conroy, Michael; Wang, Chih-Hao; Abramczyk, Jaroslaw; Campbell, Stuart; Oulundsen, George; Tankala, Kanishka
2014-03-01
A large number of power delivery applications for optical fibers require beams with very specific output intensity profiles; in particular, applications that require a focused high intensity beam typically image the near field (NF) intensity distribution at the exit surface of an optical fiber. In this work we discuss optical fiber designs that shape the output beam profile to more closely correspond to what is required in many real world industrial applications. Specifically we present results demonstrating the ability to transform Gaussian beams to shapes required for industrial applications and how that relates to system parameters such as beam product parameter (BPP) values. We report on how different waveguide structures perform in the NF and show results on how to achieve flat-top beams with circular outputs.
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
The NASA Space Launch System Program Systems Engineering Approach for Affordability
NASA Technical Reports Server (NTRS)
Hutt, John J.; Whitehead, Josh; Hanson, John
2017-01-01
The National Aeronautics and Space Administration is currently developing the Space Launch System to provide the United States with a capability to launch large Payloads into Low Earth orbit and deep space. One of the development tenets of the SLS Program is affordability. One initiative to enhance affordability is the SLS approach to requirements definition, verification and system certification. The key aspects of this initiative include: 1) Minimizing the number of requirements, 2) Elimination of explicit verification requirements, 3) Use of certified models of subsystem capability in lieu of requirements when appropriate and 4) Certification of capability beyond minimum required capability. Implementation of each aspect is described and compared to a "typical" systems engineering implementation, including a discussion of relative risk. Examples of each implementation within the SLS Program are provided.
Unfurlable satellite antennas - A review
NASA Technical Reports Server (NTRS)
Roederer, Antoine G.; Rahmat-Samii, Yahia
1989-01-01
A review of unfurlable satellite antennas is presented. Typical application requirements for future space missions are first outlined. Then, U.S. and European mesh and inflatable antenna concepts are described. Precision deployables using rigid panels or petals are not included in the survey. RF modeling and performance analysis of gored or faceted mesh reflector antennas are then reviewed. Finally, both on-ground and in-orbit RF test techniques for large unfurlable antennas are discussed.
Multiscale numerical simulations of magnetoconvection at low magnetic Prandtl and Rossby numbers.
NASA Astrophysics Data System (ADS)
Maffei, S.; Calkins, M. A.; Julien, K. A.; Marti, P.
2017-12-01
The dynamics of the Earth's outer core is characterized by low values of the Rossby (Ro), Ekman and magnetic Prandtl numbers. These values indicate the large spectra of temporal and spatial scales that need to be accounted for in realistic numerical simulations of the system. Current direct numerical simulations are not capable of reaching this extreme regime, suggesting that a new class of models is required to account for the rich dynamics expected in the natural system. Here we present results from a quasi-geostrophic, multiscale model based on the scale separation implied by the low Ro typical of rapidly rotating systems. We investigate a plane layer geometry where convection is driven by an imposed temperature gradient and the hydrodynamic equations are modified by a large scale magnetic field. Analytical investigation shows that at values of thermal and magnetic Prandtl numbers relevant for liquid metals, the energetic requirements for the onset of convection are not significantly altered even in the presence of strong magnetic fields. Results from strongly forced nonlinear numerical simulations show the presence of an inverse cascade, typical of 2-D turbulence, when no or weak magnetic field is applied. For higher values of the magnetic field the inverse cascade is quenched.
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Muller, Christoff
2015-01-01
Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).
Pouching a draining duodenal cutaneous fistula: a case study.
Zwanziger, P J
1999-01-01
Blockage of the mesenteric artery typically causes necrosis to the colon, requiring extensive surgical resection. In severe cases, the necrosis requires removal of the entire colon, creating numerous problems for the WOC nurse when pouching the opening created for effluent. This article describes the management of a draining duodenal fistula in a middle-aged woman, who survived surgery for a blocked mesenteric artery that necessitated the removal of the majority of the small and large intestine. Nutrition, skin management, and pouch options are described over a number of months as the fistula evolved and a stoma was created.
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis was made to identify airplane research and technology necessary to ensure advanced transport aircraft the capability of accommodating forecast traffic without adverse impact on airport communities. Projections were made of the delay, noise, and emissions impact of future aircraft fleets on a typical large urban airport. Design requirements, based on these projections, were developed for an advanced technology, long-haul, subsonic transport. A baseline aircraft was modified to fulfill the design requirements for terminal area compatibility. Technical and economic comparisons were made between these and other aircraft configured to support the study.
Dielectrics for long term space exposure and spacecraft charging: A briefing
NASA Technical Reports Server (NTRS)
Frederickson, A. R.
1989-01-01
Charging of dielectrics is a bulk, not a surface property. Radiation driven charge stops within the bulk and is not quickly conducted to the surface. Very large electric fields develop in the bulk due to this stopped charge. At space radiation levels, it typically requires hours or days for the internal electric fields to reach steady state. The resulting electric fields are large enough to produce electrical failure within the insulator. This type of failure is thought to produce nearly all electric discharge anomalies. Radiation also induces bond breakage, creates reactive radicals, displaces atoms and, in general, severely changes the chemistry of the solid state material. Electric fields can alter this process by reacting with charged species, driving them through the solid. Irradiated polymers often lose as much as a percent of their mass, or more, at exposures typical in space. Very different aging or contaminant emission can be induced by the stopped charge electric fields. These radiation effects are detailed.
NASA Astrophysics Data System (ADS)
Geelen, Christopher D.; Wijnhoven, Rob G. J.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
This research considers gender classification in surveillance environments, typically involving low-resolution images and a large amount of viewpoint variations and occlusions. Gender classification is inherently difficult due to the large intra-class variation and interclass correlation. We have developed a gender classification system, which is successfully evaluated on two novel datasets, which realistically consider the above conditions, typical for surveillance. The system reaches a mean accuracy of up to 90% and approaches our human baseline of 92.6%, proving a high-quality gender classification system. We also present an in-depth discussion of the fundamental differences between SVM and RF classifiers. We conclude that balancing the degree of randomization in any classifier is required for the highest classification accuracy. For our problem, an RF-SVM hybrid classifier exploiting the combination of HSV and LBP features results in the highest classification accuracy of 89.9 ± 0.2%, while classification computation time is negligible compared to the detection time of pedestrians.
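The general idea of combining an SVM and a random forest can be sketched with scikit-learn. The snippet below is a generic soft-voting ensemble on random stand-in feature vectors (taking the place of concatenated HSV and LBP histograms); it does not reproduce the paper's actual hybrid classifier, features, or datasets.

# Generic sketch of an SVM + random-forest combination via soft voting, in the spirit
# of the RF-SVM hybrid mentioned above. Feature vectors and labels are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((400, 64))                  # stand-in for HSV+LBP histogram features
y = rng.integers(0, 2, size=400)           # stand-in binary gender labels

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
forest = RandomForestClassifier(n_estimators=200, random_state=0)
hybrid = VotingClassifier([("svm", svm), ("rf", forest)], voting="soft")

scores = cross_val_score(hybrid, X, y, cv=5)
print(f"mean CV accuracy on random data: {scores.mean():.3f}")  # ~0.5 by construction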
Hayes, Tyler R; Bang, Jae Jin; Davis, Tyson C; Peterson, Caroline F; McMillan, David G; Claridge, Shelley A
2017-10-18
As functionalized 2D materials are incorporated into hybrid materials, ensuring large-area structural control in noncovalently adsorbed films becomes increasingly important. Noncovalent functionalization avoids disrupting electronic structure in 2D materials; however, relatively weak molecular interactions in such monolayers typically reduce stability toward solution processing and other common material handling conditions. Here, we find that controlling substrate temperature during Langmuir-Schaefer conversion of a standing phase monolayer of diynoic amphiphiles on water to a horizontally oriented monolayer on a 2D substrate routinely produces multimicrometer domains, at least an order of magnitude larger than those typically achieved through drop-casting. Following polymerization, these highly ordered monolayers retain their structures during vigorous washing with solvents including water, ethanol, tetrahydrofuran, and toluene. These findings point to a convenient and broadly applicable strategy for noncovalent functionalization of 2D materials in applications that require large-area structural control, for instance, to minimize desorption at defects during subsequent solution processing.
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
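The block structure exploited by the linearized iteration can be illustrated on a small 2x2 block linear system solved by block LU elimination through the Schur complement, so that only the diagonal blocks are ever factorized. The matrices below are random, well-conditioned stand-ins, not an actual fluid-structure Jacobian from the method above.

# Minimal sketch of solving a 2x2 block system
#   [A  B] [x_f]   [b_f]
#   [C  D] [x_s] = [b_s]
# by block LU (Schur complement) elimination. Blocks are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
nf, ns = 50, 10                              # assumed fluid and structure block sizes
A = np.eye(nf) + 0.1 * rng.standard_normal((nf, nf))
B = 0.1 * rng.standard_normal((nf, ns))
C = 0.1 * rng.standard_normal((ns, nf))
D = np.eye(ns) + 0.1 * rng.standard_normal((ns, ns))
b_f = rng.standard_normal(nf)
b_s = rng.standard_normal(ns)

# Block forward elimination: S = D - C A^{-1} B is the (small) Schur complement,
# so only systems with A and S are solved, never the assembled block matrix.
Ainv_B = np.linalg.solve(A, B)
Ainv_bf = np.linalg.solve(A, b_f)
S = D - C @ Ainv_B
x_s = np.linalg.solve(S, b_s - C @ Ainv_bf)
x_f = Ainv_bf - Ainv_B @ x_s

# Check against a monolithic solve of the assembled system.
full = np.block([[A, B], [C, D]])
ref = np.linalg.solve(full, np.concatenate([b_f, b_s]))
print(np.allclose(np.concatenate([x_f, x_s]), ref))   # True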
A framework for global river flood risk assessment
NASA Astrophysics Data System (ADS)
Winsemius, H. C.; Van Beek, L. P. H.; Bouwman, A.; Ward, P. J.; Jongman, B.
2012-04-01
There is an increasing need for strategic global assessments of flood risks. Such assessments may be required by: (a) International Financing Institutes and Disaster Management Agencies to evaluate where, when, and which investments in flood risk mitigation are most required; (b) (re-)insurers, who need to determine their required coverage capital; and (c) large companies to account for risks of regional investments. In this contribution, we propose a framework for global river flood risk assessment. The framework combines coarse scale resolution hazard probability distributions, derived from global hydrological model runs (typical scale about 0.5 degree resolution) with high resolution estimates of exposure indicators. The high resolution is required because floods typically occur at a much smaller scale than the typical resolution of global hydrological models, and exposure indicators such as population, land use and economic value generally are strongly variable in space and time. The framework therefore estimates hazard at a high resolution ( 1 km2) by using a) global forcing data sets of the current (or in scenario mode, future) climate; b) a global hydrological model; c) a global flood routing model, and d) importantly, a flood spatial downscaling routine. This results in probability distributions of annual flood extremes as an indicator of flood hazard, at the appropriate resolution. A second component of the framework combines the hazard probability distribution with classical flood impact models (e.g. damage, affected GDP, affected population) to establish indicators for flood risk. The framework can be applied with a large number of datasets and models and sensitivities of such choices can be evaluated by the user. The framework is applied using the global hydrological model PCR-GLOBWB, combined with a global flood routing model. Downscaling of the hazard probability distributions to 1 km2 resolution is performed with a new downscaling algorithm, applied on a number of target regions. We demonstrate the use of impact models in these regions based on global GDP, population, and land use maps. In this application, we show sensitivities of the estimated risks with regard to the use of different climate input datasets, decisions made in the downscaling algorithm, and different approaches to establish distributed estimates of GDP and asset exposure to flooding.
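The risk step of such a framework, combining a hazard probability distribution with an impact model, reduces at a single location to integrating damage over exceedance probability. The toy numbers below (depths, probabilities, damage curve, asset value) are invented placeholders and are not output of the framework described above.

# Minimal sketch: expected annual damage from flood depths at several annual
# exceedance probabilities and a toy depth-damage function.
import numpy as np

aep = np.array([0.1, 0.04, 0.02, 0.01, 0.002])     # annual exceedance probabilities
depth = np.array([0.2, 0.6, 1.0, 1.5, 2.5])        # corresponding flood depths, m

def damage_fraction(d):
    """Toy depth-damage function: fraction of asset value lost at depth d (m)."""
    return np.clip(d / 3.0, 0.0, 1.0)

asset_value = 2.0e5                                # assumed exposed value at the location
damage = damage_fraction(depth) * asset_value

# Trapezoidal integration of damage over exceedance probability (frequent to rare).
ead = np.sum(0.5 * (damage[:-1] + damage[1:]) * (aep[:-1] - aep[1:]))
print(f"expected annual damage: {ead:.0f}")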
Conically scanned lidar telescope using holographic optical elements
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K.; Wilkerson, Thomas D.
1992-01-01
Holographic optical elements (HOE) using volume phase holograms make possible a new class of lightweight scanning telescopes having advantages for lidar remote sensing instruments. So far, the only application of HOE's to lidar has been a non-scanning receiver for a laser range finder. We introduce a large aperture, narrow field of view (FOV) telescope used in a conical scanning configuration, having a much smaller rotating mass than in conventional designs. Typically, lidars employ a large aperture collector and require a narrow FOV to limit the amount of skylight background. Focal plane techniques are not good approaches to scanning because they require a large FOV within which to scan a smaller FOV mirror or detector array. Thus, scanning lidar systems have either used a large flat scanning mirror at which the receiver telescope is pointed, or the entire telescope is steered. We present a concept for a conically scanned lidar telescope in which the only moving part is the HOE which serves as the primary collecting optic. We also describe methods by which a multiplexed HOE can be used simultaneously as a dichroic beamsplitter.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large size data sets without increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.
Mirror coatings for large aperture UV optical infrared telescope optics
NASA Astrophysics Data System (ADS)
Balasubramanian, Kunjithapatham; Hennessy, John; Raouf, Nasrat; Nikzad, Shouleh; Del Hoyo, Javier; Quijada, Manuel
2017-09-01
Large space telescope concepts such as LUVOIR and HabEx aiming for observations from far UV to near IR require advanced coating technologies to enable efficient gathering of light with important spectral signatures including those in far UV region down to 90nm. Typical Aluminum mirrors protected with MgF2 fall short of the requirements below 120nm. New and improved coatings are sought to protect aluminum from oxidizing readily in normal environment causing severe absorption and reduction of reflectance in the deep UV. Choice of materials and the process of applying coatings present challenges. Here we present the progress achieved to date with experimental investigations of coatings at JPL and at GSFC and discuss the path forward to achieve high reflectance in the spectral region from 90 to 300nm without degrading performance in the visible and NIR regions taking into account durability concerns when the mirrors are exposed to normal laboratory environment as well as high humidity conditions. Reflectivity uniformity required on these mirrors is also discussed.
Hybrid estimation of complex systems.
Hofbaur, Michael W; Williams, Brian C
2004-10-01
Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
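The multiple-model starting point can be illustrated with a toy bank of 1-D Kalman filters, one per discrete mode, whose mode probabilities are updated from each filter's measurement likelihood. This is only a sketch of the classical MM scheme that the paper takes as its baseline, not the focused hybrid estimation algorithm it develops; all model parameters are invented.

# Toy multiple-model estimator: a bank of 1-D Kalman filters, one per mode, with
# mode probabilities updated from Gaussian innovation likelihoods.
import math
import numpy as np

rng = np.random.default_rng(3)

modes = {"nominal": 0.0, "biased": 1.0}     # assumed constant input per mode
q, r = 0.01, 0.25                           # process and measurement noise variances
x_est = {m: 0.0 for m in modes}
p_est = {m: 1.0 for m in modes}
prob = {m: 0.5 for m in modes}

true_x = 0.0
for k in range(50):
    true_x += modes["biased"] * 0.1 + rng.normal(0.0, math.sqrt(q))  # truth follows "biased"
    z = true_x + rng.normal(0.0, math.sqrt(r))
    for m, u in modes.items():
        # Predict and update the filter conditioned on mode m.
        x_pred = x_est[m] + 0.1 * u
        p_pred = p_est[m] + q
        s = p_pred + r                                   # innovation variance
        innov = z - x_pred
        gain = p_pred / s
        x_est[m] = x_pred + gain * innov
        p_est[m] = (1.0 - gain) * p_pred
        # Weight the mode by the likelihood of its innovation.
        prob[m] *= math.exp(-0.5 * innov**2 / s) / math.sqrt(2.0 * math.pi * s)
    total = sum(prob.values())
    prob = {m: p / total for m, p in prob.items()}

print({m: round(p, 3) for m, p in prob.items()})         # "biased" should dominate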
The Reactivation of Motion influences Size Categorization in a Visuo-Haptic Illusion.
Rey, Amandine E; Dabic, Stephanie; Versace, Remy; Navarro, Jordan
2016-09-01
People simulate themselves moving when they view a picture, read a sentence, or simulate a situation that involves motion. The simulation of motion has often been studied in conceptual tasks such as language comprehension. However, most of these studies investigated the direct influence of motion simulation on tasks inducing motion. This article investigates whether a motion induced by the reactivation of a dynamic picture can influence a task that did not require motion processing. In a first phase, a dynamic picture and a static picture were systematically presented with a vibrotactile stimulus (high or low frequency). The second phase of the experiment used a priming paradigm in which a vibrotactile stimulus was presented alone and followed by pictures of objects. Participants had to categorize objects as large or small relative to their typical size (simulated size). Results showed that when the target object was preceded by the vibrotactile stimulus previously associated with the dynamic picture, participants perceived all the objects as larger and categorized them more quickly when the objects were typically "large" and more slowly when the objects were typically "small." In light of embodied cognition theories, this bias in participants' perception is assumed to be caused by an induced forward motion generated by the reactivated dynamic picture, which affects simulation of the size of the objects.
Universal and idiosyncratic characteristic lengths in bacterial genomes
NASA Astrophysics Data System (ADS)
Junier, Ivan; Frémont, Paul; Rivoire, Olivier
2018-05-01
In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA has thus a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10–20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist along these fundamental universal structures.
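The GC-content correlation analysis mentioned above can be sketched in a few lines: compute GC content in sliding windows along a sequence and estimate its correlation as a function of genomic distance. The random sequence below has no real structure, so the correlation should decay within roughly one window length; the ~10-20 kb characteristic length is a property of real genomes, not of this synthetic example.

# Minimal sketch: sliding-window GC content and its correlation versus genomic distance.
import numpy as np

rng = np.random.default_rng(4)
seq = rng.choice(list("ACGT"), size=200_000)          # synthetic stand-in genome
is_gc = np.isin(seq, ["G", "C"]).astype(float)

window = 1_000                                        # bp, assumed window size
kernel = np.ones(window) / window
gc = np.convolve(is_gc, kernel, mode="valid")         # sliding-window GC content

gc0 = gc - gc.mean()
for lag_kb in (1, 5, 10, 20):
    lag = lag_kb * 1_000
    corr = np.corrcoef(gc0[:-lag], gc0[lag:])[0, 1]
    print(f"lag {lag_kb:>2} kb: correlation {corr:+.3f}")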
LES Investigation of Wake Development in a Transonic Fan Stage for Aeroacoustic Analysis
NASA Technical Reports Server (NTRS)
Hah, Chunill; Romeo, Michael
2017-01-01
Detailed development of the rotor wake and its interaction with the stator are investigated with a large eddy simulation (LES). Typical steady and unsteady Navier-Stokes approaches (RANS and URANS) do not calculate wake development accurately and do not provide all the necessary information for an aeroacoustic analysis. It is generally believed that higher fidelity analysis tools are required for an aeroacoustic investigation of transonic fan stages.
Towards a Methodology for Identifying Program Constraints During Requirements Analysis
NASA Technical Reports Server (NTRS)
Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo
1997-01-01
Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Some of the problems that exist in defining requirements in large scale software projects includes synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provide for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.
What limits the achievable areal densities of large aperture space telescopes?
NASA Astrophysics Data System (ADS)
Peterson, Lee D.; Hinkle, Jason D.
2005-08-01
This paper examines requirements trades involving areal density for large space telescope mirrors. A segmented mirror architecture is used to define a quantitative example that leads to relevant insight about the trades. In this architecture, the mirror consists of segments of non-structural optical elements held in place by a structural truss that rests behind the segments. An analysis is presented of the driving design requirements for typical on-orbit loads and ground-test loads. It is shown that the driving on-orbit load would be the resonance of the lowest mode of the mirror by a reaction wheel static unbalance. The driving ground-test load would be dynamics due to ground-induced random vibration. Two general conclusions are derived from these results. First, the areal density that can be allocated to the segments depends on the depth allocated to the structure. More depth in the structure allows the allocation of more mass to the segments. This, however, leads to large structural depth that might be a significant development challenge. Second, the requirement for ground-test-ability results in an order of magnitude or more depth in the structure than is required by the on-orbit loads. This leads to the proposition that avoiding ground test as a driving requirement should be a fundamental technology on par with the provision of deployable depth. Both are important structural challenges for these future systems.
Protein Folding Using a Vortex Fluidic Device.
Britton, Joshua; Smith, Joshua N; Raston, Colin L; Weiss, Gregory A
2017-01-01
Essentially all biochemistry and most molecular biology experiments require recombinant proteins. However, large, hydrophobic proteins typically aggregate into insoluble and misfolded species, and are directed into inclusion bodies. Current techniques to fold proteins recovered from inclusion bodies rely on denaturation followed by dialysis or rapid dilution. Such approaches can be time consuming, wasteful, and inefficient. Here, we describe rapid protein folding using a vortex fluidic device (VFD). This process uses mechanical energy introduced into thin films to rapidly and efficiently fold proteins. With the VFD in continuous flow mode, large volumes of protein solution can be processed per day with 100-fold reductions in both folding times and buffer volumes.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
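The matvec-only character of this class of methods can be illustrated with a simplified gradient-projection iteration for l1-regularized least squares in the u - v splitting used by GPSR, where the operator is touched only through forward and adjoint products. This is a compact stand-in under assumed parameters (random operator, step size, regularization weight), not the EIT reconstruction code or the full GPSR algorithm.

# Simplified gradient-projection sketch for min_x 0.5*||y - A x||^2 + tau*||x||_1,
# using the split x = u - v with u, v >= 0 and only matrix-vector products.
import numpy as np

rng = np.random.default_rng(5)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(m)

matvec = lambda x: A @ x            # forward operator access
rmatvec = lambda r: A.T @ r         # adjoint operator access

# Estimate ||A||^2 with a few power iterations (matvecs only), then pick a safe step.
z = rng.standard_normal(n)
for _ in range(20):
    z = rmatvec(matvec(z))
    z /= np.linalg.norm(z)
lipschitz = np.linalg.norm(rmatvec(matvec(z)))
step, tau = 0.5 / lipschitz, 0.05

u = np.zeros(n)
v = np.zeros(n)
for _ in range(300):
    grad = rmatvec(matvec(u - v) - y)             # gradient of the smooth term
    u = np.maximum(u - step * (grad + tau), 0.0)  # projected gradient steps
    v = np.maximum(v - step * (-grad + tau), 0.0)

x_hat = u - v
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")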
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
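The core idea of explicitly enumerating dataset members so that a collection is unambiguous and verifiable can be illustrated in plain Python. The sketch below builds and checks a simple checksum manifest for a hypothetical local directory; it does not reproduce the BDBag, Research Object, or Minid tooling itself.

# Minimal illustration of bag-style dataset packaging: list every member of a data
# collection with a checksum so its contents are unambiguous after exchange.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir):
    """Return {relative_path: sha256} for every file under data_dir."""
    root = Path(data_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_manifest(data_dir, manifest):
    """Check that the directory still matches a previously recorded manifest."""
    return build_manifest(data_dir) == manifest

if __name__ == "__main__":
    data_dir = "dataset"                      # hypothetical local data directory
    manifest = build_manifest(data_dir)
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print("members:", len(manifest), "verified:", verify_manifest(data_dir, manifest))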
Korir, Geoffrey; Karam, P Andrew
2018-06-11
In the event of a significant radiological release in a major urban area where a large number of people reside, it is inevitable that radiological screening and dose assessment must be conducted. Lives may be saved if an emergency response plan and radiological screening method are established for use in such cases. Thousands to tens of thousands of people might present themselves with some levels of external contamination and/or the potential for internal contamination. Each of these individuals will require varying degrees of radiological screening, and those with a high likelihood of internal and/or external contamination will require radiological assessment to determine the need for medical attention and decontamination. This sort of radiological assessment typically requires skilled health physicists, but there are insufficient numbers of health physicists in any city to perform this function for large populations, especially since many (e.g., those at medical facilities) are likely to be engaged at their designated institutions. The aim of this paper is therefore to develop and describe the technical basis for a novel, scoring-based methodology that can be used by non-health physicists for performing radiological assessment during such radiological events.
High-accuracy single-pass InSAR DEM for large-scale flood hazard applications
NASA Astrophysics Data System (ADS)
Schumann, G.; Faherty, D.; Moller, D.
2017-12-01
In this study, we used a unique opportunity of the GLISTIN-A (NASA airborne mission designed to characterize the cryosphere) track to Greenland to acquire a high-resolution InSAR DEM of a large area in the Red River of the North Basin (north of Grand Forks, ND, USA), which is a very flood-vulnerable valley, particularly in springtime due to increased soil moisture content near saturation and/or, as is typical for this region, snowmelt. Having an InSAR DEM that meets flood inundation modeling and mapping requirements comparable to LiDAR would demonstrate great application potential of new radar technology for national agencies with an operational flood forecasting mandate and also local state governments active in flood event prediction, disaster response and mitigation. Specifically, we derived a bare-earth DEM in SAR geometry by first removing the inherent far range bias related to airborne operation, which at the more typical large-scale DEM resolution of 30 m has a sensor accuracy of plus or minus 2.5 cm. Subsequently, an intelligent classifier based on informed relationships between InSAR height, intensity and correlation was used to distinguish between bare-earth, roads or embankments, buildings and tall vegetation in order to facilitate the creation of a bare-earth DEM that would meet the requirements for accurate floodplain inundation mapping. Using state-of-the-art LiDAR terrain data, we demonstrate that capability by achieving a root mean squared error of approximately 25 cm and further illustrating its applicability to flood modeling.
Hydrocode simulations of air and water shocks for facility vulnerability assessments.
Clutter, J Keith; Stahl, Michael
2004-01-02
Hydrocodes are widely used in the study of explosive systems but their use in routine facility vulnerability assessments has been limited due to the computational resources typically required. These requirements are due to the fact that the majority of hydrocodes have been developed primarily for the simulation of weapon-scale phenomena. It is not practical to use these same numerical frameworks on the large domains found in facility vulnerability studies. Here, a hydrocode formulated specifically for facility vulnerability assessments is reviewed. Techniques used to accurately represent the explosive source while maintaining computational efficiency are described. Submodels for addressing other issues found in typical terrorist attack scenarios are presented. In terrorist attack scenarios, loads produced by shocks play an important role in vulnerability. Due to the difference in the material properties of water and air and interface phenomena, there exists significant contrast in wave propagation phenomena in these two media. These physical variations also require special attention be paid to the mathematical and numerical models used in the hydrocodes. Simulations for a variety of air and water shock scenarios are presented to validate the computational models used in the hydrocode and highlight the phenomenological issues.
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
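The calibration-and-validation loop described above can be sketched on synthetic data: fit a COCOMO-style log-linear effort model by least squares and evaluate it with leave-one-out error. The project records, multipliers, and noise level below are invented; the point is only to show how a small, high-variance dataset with several multipliers yields unstable calibrations.

# Minimal sketch: calibrate effort = a * size^b * EM1 * EM2 in log space and check it
# with leave-one-out relative error on synthetic project data.
import numpy as np

rng = np.random.default_rng(6)
n_proj = 20
size = rng.uniform(10, 500, n_proj)              # KSLOC, synthetic
em = rng.uniform(0.8, 1.3, (n_proj, 2))          # two effort multipliers, synthetic
true_effort = 2.5 * size**1.05 * em[:, 0] * em[:, 1]
effort = true_effort * rng.lognormal(0.0, 0.4, n_proj)   # large-variance noise

X = np.column_stack([np.ones(n_proj), np.log(size), np.log(em)])
y = np.log(effort)

def fit(Xm, ym):
    coef, *_ = np.linalg.lstsq(Xm, ym, rcond=None)
    return coef

loo_err = []
for i in range(n_proj):
    mask = np.arange(n_proj) != i
    coef = fit(X[mask], y[mask])
    pred = np.exp(X[i] @ coef)
    loo_err.append(abs(pred - effort[i]) / effort[i])

coef_all = fit(X, y)
print("calibrated exponent b:", round(coef_all[1], 2))
print("median leave-one-out relative error:", round(float(np.median(loo_err)), 2))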
Managing the Nuclear Fuel Cycle: Policy Implications of Expanding Global Access to Nuclear Power
2008-09-03
Spent nuclear fuel disposal has remained the most critical aspect of the nuclear fuel cycle for the United States, where longstanding nonproliferation...inalienable right and by and large, neither have U.S. government officials. However, the case of Iran raises perhaps the most critical question in...the enrichment process can take advantage of the slight difference in atomic mass between 235U and 238U. The typical enrichment process requires
Effects of rotation on coolant passage heat transfer. Volume 1: Coolant passages with smooth walls
NASA Technical Reports Server (NTRS)
Hajek, T. J.; Wagner, J. H.; Johnson, B. V.; Higgins, A. W.; Steuber, G. D.
1991-01-01
An experimental program was conducted to investigate heat transfer and pressure loss characteristics of rotating multipass passages, for configurations and dimensions typical of modern turbine blades. The immediate objective was the generation of a data base of heat transfer and pressure loss data required to develop heat transfer correlations and to assess computational fluid dynamic techniques for rotating coolant passages. Experiments were conducted in a smooth wall large scale heat transfer model.
Leif Mortenson
2015-01-01
Globally, national forest inventories (NFI) require a large work force typically consisting of multiple teams spread across multiple locations in order to successfully capture a given nation's forest resources. This is true of the Forest Inventory and Analysis (FIA) program in the US and in many inventories in developing countries that are supported by USFS...
Parallel processing implementations of a contextual classifier for multispectral remote sensing data
NASA Technical Reports Server (NTRS)
Siegel, H. J.; Swain, P. H.; Smith, B. W.
1980-01-01
Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.
McClenaghan, Joseph; Garofalo, Andrea M.; Meneghini, Orso; ...
2017-08-03
In this study, transport modeling of a proposed ITER steady-state scenario based on DIII-D high poloidal-beta (β_p) discharges finds that ITB formation can occur with either sufficient rotation or a negative central shear q-profile. The high β_p scenario is characterized by a large bootstrap current fraction (80%) which reduces the demands on the external current drive, and a large radius internal transport barrier which is associated with excellent normalized confinement. Modeling predictions of the electron transport in the high β_p scenario improve as q_95 approaches levels similar to typical existing models of ITER steady-state and the ion transport is turbulence dominated. Typical temperature and density profiles from the non-inductive high β_p scenario on DIII-D are scaled according to 0D modeling predictions of the requirements for achieving a Q = 5 steady-state fusion gain in ITER with 'day one' heating and current drive capabilities. Then, TGLF turbulence modeling is carried out under systematic variations of the toroidal rotation and the core q-profile. A high bootstrap fraction, high β_p scenario is found to be near an ITB formation threshold, and either strong negative central magnetic shear or rotation in a high bootstrap fraction is found to successfully provide the turbulence suppression required to achieve Q = 5.
Tipping elements in the Arctic marine ecosystem.
Duarte, Carlos M; Agustí, Susana; Wassmann, Paul; Arrieta, Jesús M; Alcaraz, Miquel; Coello, Alexandra; Marbà, Núria; Hendriks, Iris E; Holding, Johnna; García-Zarandona, Iñigo; Kritzberg, Emma; Vaqué, Dolors
2012-02-01
The Arctic marine ecosystem contains multiple elements that present alternative states. The most obvious of these is an Arctic Ocean largely covered by an ice sheet in summer versus one largely devoid of such cover. Ecosystems under pressure typically shift between such alternative states in an abrupt, rather than smooth manner, with the level of forcing required for shifting this status termed threshold or tipping point. Loss of Arctic ice due to anthropogenic climate change is accelerating, with the extent of Arctic sea ice displaying increased variance at present, a leading indicator of the proximity of a possible tipping point. Reduced ice extent is expected, in turn, to set a number of additional tipping elements, physical, chemical, and biological, in motion, with potentially large impacts on the Arctic marine ecosystem.
Range Performance of Bombers Powered by Turbine-Propeller Power Plants
NASA Technical Reports Server (NTRS)
Cline, Charles W.
1950-01-01
Calculations have been made to find ranges attainable by bombers of gross weights from 140,000 to 300,000 pounds powered by turbine-propeller power plants. Only conventional configurations were considered and emphasis was placed upon using data for structural and aerodynamic characteristics which are typical of modern military airplanes. An effort was made to limit the various parameters involved in the airplane configuration to practical values. Therefore, extremely high wing loadings, large amounts of sweepback, and very high aspect ratios have not been considered. Power-plant performance was based upon the performance of a typical turbine-propeller engine equipped with propellers designed to maintain high efficiencies at high-subsonic speeds. Results indicated, in general, that the greatest range, for a given gross weight, is obtained by airplanes of high wing loading, unless the higher cruising speeds associated with the high-wing-loading airplanes require the use of thinner wing sections. Further results showed the effect of cruising at high speeds, of operation at very high altitudes, and of carrying large bomb loads.
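The kind of closed-form estimate underlying such range studies is the classic Breguet relation for propeller-driven aircraft, R = (eta_p / c_p) * (L/D) * ln(W_initial / W_final). The inputs in the sketch below are round illustrative numbers, not values taken from the report above.

# Illustrative Breguet range estimate for a propeller aircraft.
import math

def breguet_range_prop(eta_p, psfc_kg_per_kwh, lift_to_drag, weight_ratio):
    """Range in km; psfc is power-specific fuel consumption in kg per kWh."""
    g = 9.81                                          # m/s^2
    c_p = psfc_kg_per_kwh * g / (1000.0 * 3600.0)     # convert to N per W per s (1/m)
    return eta_p * lift_to_drag * math.log(weight_ratio) / c_p / 1000.0

if __name__ == "__main__":
    r_km = breguet_range_prop(eta_p=0.85, psfc_kg_per_kwh=0.30,
                              lift_to_drag=18.0, weight_ratio=1.35)
    print(f"estimated range: {r_km:.0f} km")          # roughly 5,600 km with these inputs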
A Rigid Mid-Lift-to-Drag Ratio Approach to Human Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Cerimele, Christopher J.; Robertson, Edward A.; Sostaric, Ronald R.; Campbell, Charles H.; Robinson, Phil; Matz, Daniel A.; Johnson, Breanna J.; Stachowiak, Susan J.; Garcia, Joseph A.; Bowles, Jeffrey V.;
2017-01-01
Current NASA Human Mars architectures require delivery of approximately 20 metric tons of cargo to the surface in a single landing. A proposed vehicle type for performing the entry, descent, and landing at Mars associated with this architecture is a rigid, enclosed, elongated lifting body shape that provides a higher lift-to-drag ratio (L/D) than a typical entry capsule, but lower than a typical winged entry vehicle (such as the Space Shuttle Orbiter). A rigid Mid-L/D shape has advantages for large mass Mars EDL, including loads management, range capability during entry, and human spaceflight heritage. Previous large mass Mars studies have focused more on symmetric and/or circular cross-section Mid-L/D shapes such as the ellipsled. More recent work has shown performance advantages for non-circular cross section shapes. This paper will describe efforts to design a rigid Mid-L/D entry vehicle for Mars which shows mass and performance improvements over previous Mid-L/D studies. The proposed concept, work to date and evolution, forward path, and suggested future strategy are described.
Weak lensing calibration of mass bias in the REFLEX+BCS X-ray galaxy cluster catalogue
NASA Astrophysics Data System (ADS)
Simet, Melanie; Battaglia, Nicholas; Mandelbaum, Rachel; Seljak, Uroš
2017-04-01
The use of large, X-ray-selected Galaxy cluster catalogues for cosmological analyses requires a thorough understanding of the X-ray mass estimates. Weak gravitational lensing is an ideal method to shed light on such issues, due to its insensitivity to the cluster dynamical state. We perform a weak lensing calibration of 166 galaxy clusters from the REFLEX and BCS cluster catalogue and compare our results to the X-ray masses based on scaled luminosities from that catalogue. To interpret the weak lensing signal in terms of cluster masses, we compare the lensing signal to simple theoretical Navarro-Frenk-White models and to simulated cluster lensing profiles, including complications such as cluster substructure, projected large-scale structure and Eddington bias. We find evidence of underestimation in the X-ray masses, as expected, with
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical life or death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured in a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
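The event-driven reference-frame idea can be sketched with simple bookkeeping: mark a frame as an I-frame whenever a vehicle is detected at the trigger position (with a fallback GOP length), so that a later search only needs to examine the I-frames. The detection flags below are synthetic booleans and no real video codec is invoked; this is an illustration of the concept, not the paper's implementation.

# Sketch of detection-triggered I-frame selection and I-frame-only search.
import numpy as np

rng = np.random.default_rng(7)
n_frames = 2_000
detected = rng.random(n_frames) < 0.01        # stand-in "vehicle at trigger position"
max_gop = 250                                 # assumed fallback GOP length

frame_types = []
since_last_i = max_gop                        # force an I-frame at the start
for k in range(n_frames):
    if detected[k] or since_last_i >= max_gop:
        frame_types.append("I")
        since_last_i = 0
    else:
        frame_types.append("P")
        since_last_i += 1

i_frames = [k for k, t in enumerate(frame_types) if t == "I"]
print(f"I-frames to inspect during a search: {len(i_frames)} of {n_frames} frames")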
A global probabilistic tsunami hazard assessment from earthquake sources
Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana
2017-01-01
Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
Managing the Nuclear Fuel Cycle: Policy Implications of Expanding Global Access to Nuclear Power
2009-07-01
inalienable right and, by and large, neither have U.S. government officials. However, the case of Iran raises perhaps the most critical question in this...slight difference in atomic mass between 235U and 238U. The typical enrichment process requires about 10 lbs of uranium U3O8 to produce 1 lb of low...thermal neutrons but can induce fission in all actinides, including all plutonium isotopes. Therefore, nuclear fuel for a fast reactor must have a
Mira: Argonne's 10-petaflops supercomputer
Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul
2018-02-13
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Mira: Argonne's 10-petaflops supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papka, Michael; Coghlan, Susan; Isaacs, Eric
2013-07-03
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Control law synthesis and optimization software for large order aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas
1989-01-01
A flexible aircraft or space structure with active control is typically modeled by a large-order state space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflection and rate limitations, as well as maintain certain stability margins, yet should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing an analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques, is described.
Passenger Transmitters as A Possible Cause of Aircraft Fuel Ignition
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Ely, Jay J.; Dudley, Kenneth L.; Scearce, Stephen A.; Hatfield, Michael O.; Richardson, Robert E.
2006-01-01
An investigation was performed to study the potential for radio frequency (RF) power radiated from transmitting Portable Electronic Devices (PEDs) to create an arcing/sparking event within the fuel tank of a large transport aircraft. A survey of RF emissions from typical intentional transmitting PEDs was first performed. Aircraft measurements of RF coupling to the fuel tank and its wiring were also performed to determine the PEDs induced power on the wiring, and the re-radiated power within the fuel tank. Laboratory simulations were conducted to determine the required RF power level for an arcing/sparking event. Data analysis shows large positive safety margins, even with simulated faults on the wiring.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
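A minimal sketch of the consistency idea, not the paper's algorithm or analysis: the matched-filter peak location is recorded block by block, and acquisition is declared when enough blocks agree on the same offset, with no SNR-dependent threshold. The code length, signal amplitude, and vote threshold below are assumptions.

```python
# Toy maximum-location consistency test for spread-spectrum acquisition.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
N = 63                                    # spreading-code length (assumed)
code = rng.choice([-1.0, 1.0], size=N)    # hypothetical binary spreading code
delay, n_blocks = 17, 8

# Received signal: repeated spreading code with an unknown delay plus noise.
rx = np.zeros(delay + (n_blocks + 1) * N)
for k in range(n_blocks):
    rx[delay + k * N: delay + (k + 1) * N] += 0.6 * code
rx += rng.normal(0.0, 1.0, rx.size)

# Matched filter and per-block peak locations (offset modulo the code length).
mf = np.correlate(rx, code, mode="valid")
peaks = [int(np.argmax(mf[k * N:(k + 1) * N])) for k in range(n_blocks)]
offset, votes = Counter(peaks).most_common(1)[0]

# Consistency test: enough blocks agreeing on one offset means "acquired".
if votes >= 5:
    print(f"acquired: code offset {offset} ({votes}/{n_blocks} blocks agree)")
else:
    print("no packet detected")
```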
Factorization in large-scale many-body calculations
Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.
2013-08-07
One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
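As a toy illustration of factorization by an additive quantum number (not BIGSTICK itself), the following groups proton and neutron Slater determinants by their summed Jz and pairs only compatible sectors on the fly, so the full product basis is never stored; the single-particle m-values are made up.

```python
# Toy basis factorization: store small proton and neutron sectors keyed by an
# additive quantum number, and form valid many-body pairs on the fly.
from itertools import combinations
from collections import defaultdict

def sectors_by_M(single_particle_m, n_particles):
    """Group all Slater determinants (occupation tuples) by their summed Jz."""
    groups = defaultdict(list)
    for occ in combinations(range(len(single_particle_m)), n_particles):
        M = sum(single_particle_m[i] for i in occ)
        groups[M].append(occ)
    return groups

# Hypothetical single-particle m-values (in units of 1/2) for a small space.
m_values = [-3, -1, 1, 3, -1, 1]
protons = sectors_by_M(m_values, 2)
neutrons = sectors_by_M(m_values, 2)

M_total = 0
pairs = sum(len(protons[Mp]) * len(neutrons[M_total - Mp])
            for Mp in protons if (M_total - Mp) in neutrons)
stored = sum(len(v) for v in protons.values()) + sum(len(v) for v in neutrons.values())
print(f"many-body states with M={M_total}: {pairs}; "
      f"sector determinants actually stored: {stored}")
```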
NASA Astrophysics Data System (ADS)
Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.
2016-12-01
National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time-consuming for use in large-scale SOC monitoring. We seek to understand whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yield valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
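The PLSR step can be sketched with scikit-learn on synthetic stand-in data; the feature counts, noise level, and target variable below are assumptions, not the SCaRP measurements.

```python
# Hedged sketch of the statistical step only: PLS regression relating thermal
# (DSC-EGA) features to an NMR-derived composition measure, on synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_thermal_channels = 300, 50
X = rng.normal(size=(n_samples, n_thermal_channels))     # stand-in thermograms
true_w = rng.normal(size=n_thermal_channels)
y = X @ true_w + rng.normal(scale=5.0, size=n_samples)   # stand-in NMR quantity

pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```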
NASA Technical Reports Server (NTRS)
Beernink, Kevin; Guha, Subhendu; Yang, Jeff; Banerjee, Arindam; Lord, Ken; DeMaggio, Greg; Liu, Frank; Pietka, Ginger; Johnson, Todd; Reinhout, Melanie;
2007-01-01
The availability of low-cost, lightweight and reliable photovoltaic (PV) modules is an important component in reducing the cost of satellites and spacecraft. In addition, future high-power spacecraft will require lightweight PV arrays with reduced stowage volume. In terms of the requirements for low mass, reduced stowage volume, and the harsh space environment, thin film amorphous silicon (a-Si) alloy cells have several advantages over other material technologies (1). The deposition process is relatively simple, inexpensive, and applicable to large area, lightweight, flexible substrates. The temperature coefficient has been found to be between -0.2 and -0.3 %/degC for high-efficiency triple-junction a-Si alloy cells, which is superior for high temperature operation compared to crystalline Si and triple-junction GaAs/InGaP/Ge devices at 0.53 %/degC and 0.45 %/degC, respectively (2). As a result, the reduction in efficiency at high temperature typical in space conditions is less for a-Si alloy cells than for their crystalline counterparts. Additionally, the a-Si alloy cells are relatively insensitive to electron and proton bombardment. We have shown that defects that are created by electrons with energies between 0.2 and 2 MeV with fluence up to 1x10(exp 15) e/sq cm and by protons with energy in the range 0.3 MeV to 5 MeV with fluence up to 1x10(exp 13) p/sq cm can be annealed out at 70 C in less than 50 hours (1). Further, modules incorporating United Solar's a-Si alloy cells have been tested on the MIR space station for 19 months with only minimal degradation (3). For stratospheric applications, such as the high altitude airship, the required PV arrays are typically of considerably higher power than current space arrays. Airships typically have a large area available for the PV, but weight is of critical importance. As a result, low cost and high specific power (W/kg) are key factors for airship PV arrays. Again, thin-film a-Si alloy solar cell technology is well suited to such applications.
An effective fractal-tree closure model for simulating blood flow in large arterial networks.
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em
2015-06-01
The aim of the present work is to address the closure problem for hemodynamic simulations by developing a flexible and effective model that accurately distributes flow in the downstream vasculature and can stably provide a physiological pressure outflow boundary condition. To achieve this goal, we model blood flow in the sub-pixel vasculature by using a non-linear 1D model in self-similar networks of compliant arteries that mimic the structure and hierarchy of vessels in the meso-vascular regime (radii 500 μm to 10 μm). We introduce a variable vessel length-to-radius ratio for small arteries and arterioles, while also addressing non-Newtonian blood rheology and arterial wall viscoelasticity effects in small arteries and arterioles. This methodology aims to overcome substantial cut-off radius sensitivities, typically arising in structured tree and linearized impedance models. The proposed model is not sensitive to outflow boundary conditions applied at the end points of the fractal network, and thus does not require calibration of resistance/capacitance parameters typically required for outflow conditions. The proposed model converges to a periodic state in two cardiac cycles even when started from zero-flow initial conditions. The resulting fractal-trees typically consist of thousands to millions of arteries, posing the need for efficient parallel algorithms. To this end, we have scaled up a Discontinuous Galerkin solver that utilizes the MPI/OpenMP hybrid programming paradigm to thousands of computer cores, and can simulate blood flow in networks of millions of arterial segments at the rate of one cycle per 5 min. The proposed model has been extensively tested on a large and complex cranial network with 50 parent, patient-specific arteries and 21 outlets to which fractal trees were attached, resulting in a network of up to 4,392,484 vessels in total, and a detailed network of the arm with 276 parent arteries and 103 outlets (a total of 702,188 vessels after attaching the fractal trees), returning physiological flow and pressure wave predictions without requiring any parameter estimation or calibration procedures. We present a novel methodology to overcome substantial cut-off radius sensitivities.
An effective fractal-tree closure model for simulating blood flow in large arterial networks
Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em.
2014-01-01
The aim of the present work is to address the closure problem for hemodynamic simulations by developing a flexible and effective model that accurately distributes flow in the downstream vasculature and can stably provide a physiological pressure outflow boundary condition. To achieve this goal, we model blood flow in the sub-pixel vasculature by using a non-linear 1D model in self-similar networks of compliant arteries that mimic the structure and hierarchy of vessels in the meso-vascular regime (radii 500 μm – 10 μm). We introduce a variable vessel length-to-radius ratio for small arteries and arterioles, while also addressing non-Newtonian blood rheology and arterial wall viscoelasticity effects in small arteries and arterioles. This methodology aims to overcome substantial cut-off radius sensitivities, typically arising in structured tree and linearized impedance models. The proposed model is not sensitive to outflow boundary conditions applied at the end points of the fractal network, and thus does not require calibration of resistance/capacitance parameters typically required for outflow conditions. The proposed model converges to a periodic state in two cardiac cycles even when started from zero-flow initial conditions. The resulting fractal-trees typically consist of thousands to millions of arteries, posing the need for efficient parallel algorithms. To this end, we have scaled up a Discontinuous Galerkin solver that utilizes the MPI/OpenMP hybrid programming paradigm to thousands of computer cores, and can simulate blood flow in networks of millions of arterial segments at the rate of one cycle per 5 minutes. The proposed model has been extensively tested on a large and complex cranial network with 50 parent, patient-specific arteries and 21 outlets to which fractal trees were attached, resulting in a network of up to 4,392,484 vessels in total, and a detailed network of the arm with 276 parent arteries and 103 outlets (a total of 702,188 vessels after attaching the fractal trees), returning physiological flow and pressure wave predictions without requiring any parameter estimation or calibration procedures. We present a novel methodology to overcome substantial cut-off radius sensitivities. PMID:25510364
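A minimal sketch of the self-similar tree construction described in the two records above (not the authors' 1D blood-flow solver): each vessel bifurcates into daughters with radii scaled by a fixed factor until a cut-off radius is reached, with a smaller length-to-radius ratio for the smallest vessels. The scaling factor and length rules are assumptions.

```python
# Toy fractal arterial tree: recursive symmetric bifurcation down to a cut-off radius.
def build_tree(radius, r_min=10e-6, beta=0.794):   # beta ~ 2**(-1/3), assumed symmetric split
    """Return a list of (radius, length) tuples for all vessels in the tree."""
    if radius < r_min:
        return []
    # Assumed rule: long segments for arteries, shorter ones for arterioles.
    length_to_radius = 50.0 if radius > 100e-6 else 20.0
    vessel = (radius, length_to_radius * radius)
    left = build_tree(beta * radius, r_min, beta)
    right = build_tree(beta * radius, r_min, beta)
    return [vessel] + left + right

tree = build_tree(500e-6)   # root radius 500 micrometres, leaves near 10 micrometres
print(f"{len(tree)} vessels, smallest radius {min(r for r, _ in tree) * 1e6:.1f} um")
```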
Gabriel, Lucinda E K; Webb, Steve A R
2013-10-01
Influenza pandemics occur intermittently and represent an existential global infectious diseases threat. The purpose of this review is to describe clinical and research preparedness for future pandemics. Pandemic influenza typically results in large numbers of individuals with life-threatening pneumonia requiring treatment in ICUs. Clinical preparedness of ICUs relates to planning to provide increased 'surge' capacity to meet increased demand and requires consideration of staffing, equipment and consumables, bed-space availability and management systems. Research preparedness is also necessary, as timely clinical research has the potential to change the trajectory of a pandemic. The clinical research response during the 2009 H1N1 influenza pandemic was suboptimal. Better planning is necessary to optimize both clinical and research responses to future pandemics.
Bellis, Mark A; Hughes, Karen; Jones, Lisa; Morleo, Michela; Nicholls, James; McCoy, Ellie; Webster, Jane; Sumnall, Harry
2015-05-22
Accurate measures of alcohol consumption are critical in assessing health harms caused by alcohol. In many countries, there are large discrepancies between survey-based measures of consumption and those based on alcohol sales. In England, surveys measuring typical alcohol consumption account for only around 60% of alcohol sold. Here, using a national survey, we measure both typical drinking and atypical/special occasion drinking (i.e., feasting and fasting) in order to develop more complete measures of alcohol consumption. A national random probability telephone survey was implemented (May 2013 to April 2014). Inclusion criteria were being resident in England and aged 16 years or over. Respondents (n = 6,085) provided information on typical drinking (amounts per day, drinking frequency) and changes in consumption associated with routine atypical days (e.g., Friday nights) and special drinking periods (e.g., holidays) and events (e.g., weddings). Generalized linear modelling was used to identify additional alcohol consumption associated with atypical/special occasion drinking by age, sex, and typical drinking level. Accounting for atypical/special occasion drinking added more than 120 million UK units of alcohol/week (~12 million bottles of wine) to population alcohol consumption in England. The greatest impact was seen among 25- to 34-year-olds with the highest typical consumption, where atypical/special occasions added approximately 18 units/week (144 g) for both sexes. Those reporting the lowest typical consumption (≤1 unit/week) showed large relative increases in consumption (209.3%) with most drinking associated with special occasions. In some demographics, adjusting for special occasions resulted in overall reductions in annual consumption (e.g., females, 65 to 74 years in the highest typical drinking category). Typical drinking alone can be a poor proxy for actual alcohol consumption. Accounting for atypical/special occasion drinking fills 41.6% of the gap between surveyed consumption and national sales in England. These additional units are inevitably linked to increases in lifetime risk of alcohol-related disease and injury, particularly as special occasions often constitute heavy drinking episodes. Better population measures of celebratory, festival, and holiday drinking are required in national surveys in order to adequately measure both alcohol consumption and the health harms associated with special occasion drinking.
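The modelling step might look roughly like the following statsmodels sketch; the variable names, distributions, and the Gamma/log-link choice are assumptions for illustration, not the study's data or exact specification.

```python
# Illustrative GLM of additional (special-occasion) consumption on made-up survey fields.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    # extra units/week attributed to atypical and special-occasion drinking (synthetic)
    "extra_units": rng.gamma(shape=2.0, scale=3.0, size=n),
    "age_group": rng.choice(["16-24", "25-34", "35-64", "65+"], size=n),
    "sex": rng.choice(["female", "male"], size=n),
    "typical_level": rng.choice(["low", "medium", "high"], size=n),
})

model = smf.glm("extra_units ~ age_group + sex + typical_level", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```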
Three-phase flow? Consider helical-coil heat exchangers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraburda, S.S.
1995-07-01
In recent years, chemical process plants have increasingly encountered processes that require heat exchange in three-phase fluids. A typical application, for example, is heating liquids containing solid catalyst particles and non-condensable gases. Heat exchangers designed for three-phase flow generally have tubes with large diameters (typically greater than two inches), because solids can build up inside the tube and lead to plugging. At the same time, in order to keep heat-transfer coefficients high, the velocity of the process fluid within the tube should also be high. As a result, heat exchangers for three-phase flow may require fewer than five tubes -- each having a required linear length that could exceed several hundred feet. Given these limitations, it is obvious that a basic shell-and-tube heat exchanger is not the most practical solution for this purpose. An alternative for three-phase flow is a helical-coil heat exchanger. The helical-coil units offer a number of advantages, including perpendicular, counter-current flow and flexible overall dimensions for the exchanger itself. The paper presents equations for calculating the tube-side and shell-side heat-transfer coefficients, the heat-exchanger size, and the tube-side and shell-side pressure drops.
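As a rough illustration of the kind of tube-side calculation involved (using the textbook Dittus-Boelter correlation with a simple curvature enhancement, not the specific design equations presented in the article), a sizing estimate might start like this; all property and geometry values below are assumed.

```python
# Rough tube-side heat-transfer coefficient for a helically coiled tube.
import math

# Assumed process and geometry values.
m_dot   = 1.5           # kg/s, tube-side mass flow
d       = 0.052         # m, tube inner diameter (~2 in)
D_coil  = 0.60          # m, coil (helix) diameter
rho, mu = 950.0, 3.0e-4  # kg/m3, Pa.s
cp, k   = 4000.0, 0.55   # J/(kg.K), W/(m.K)

v  = m_dot / (rho * math.pi * d**2 / 4)         # mean velocity in the tube
Re = rho * v * d / mu
Pr = cp * mu / k
Nu_straight = 0.023 * Re**0.8 * Pr**0.4         # Dittus-Boelter (heating)
Nu_coil = Nu_straight * (1 + 3.5 * d / D_coil)  # simple curvature enhancement
h = Nu_coil * k / d                             # tube-side coefficient, W/(m2.K)
print(f"Re = {Re:.0f}, Pr = {Pr:.1f}, h ~ {h:.0f} W/m2K")
```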
Alvarez, Carlos M; Urakov, Timur M; Vanni, Steven
2018-03-01
Pseudomeningocele is a rare but well-known complication of lumbar spine surgery, which arises in 0.068%-0.1% of individuals in large series of patients undergoing laminectomy and in up to 2% of patients with postlaminectomy symptoms. In symptomatic pseudomeningoceles, surgical reexploration and repair of the dural defect are typically necessary. Whereas the goals of pseudomeningocele repair, which are extirpation of the pseudomeningocele cavity and elimination of extradural dead space, can typically be achieved by primary closure performed using nonabsorbable sutures, giant pseudomeningoceles (> 8 cm) can require more elaborate repair in which fibrin glues, dural substitute, myofascial flaps, or all of the above are used. The authors present 2 cases of postsurgical symptomatic giant pseudomeningoceles that were repaired using a fast-resorbing polymer mesh-supported reconstruction technique, which is described here for the first time.
Recent developments in membrane-based separations in biotechnology processes: review.
Rathore, A S; Shirke, A
2011-01-01
Membrane-based separations are the most ubiquitous unit operations in biotech processes. There are several key reasons for this. First, they can be used with a large variety of applications including clarification, concentration, buffer exchange, purification, and sterilization. Second, they are available in a variety of formats, such as depth filtration, ultrafiltration, diafiltration, nanofiltration, reverse osmosis, and microfiltration. Third, they are simple to operate and are generally robust toward normal variations in feed material and operating parameters. Fourth, membrane-based separations typically require lower capital cost when compared to other processing options. As a result of these advantages, a typical biotech process has anywhere from 10 to 20 membrane-based separation steps. In this article we review the major developments that have occurred on this topic with a focus on developments in the last 5 years.
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches to explore the relative impact of increasing search space probed with Mascot by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme and with the Paragon Algorithm using its Rapid mode and Thorough mode with and without tryptic specificity. Although they performed similarly for small search space, dramatic differences were observed in large search space. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination that typically accompanies the exploration of large search space.
Influence of strain on dislocation core in silicon
NASA Astrophysics Data System (ADS)
Pizzagalli, L.; Godet, J.; Brochard, S.
2018-05-01
First principles, density functional-based tight binding and semi-empirical interatomic potentials calculations are performed to analyse the influence of large strains on the structure and stability of a 60° dislocation in silicon. Such strains typically arise during the mechanical testing of nanostructures like nanopillars or nanoparticles. We focus on bi-axial strains in the plane normal to the dislocation line. Our calculations surprisingly reveal that the dislocation core structure largely depends on the applied strain, for strain levels of about 5%. In the particular case of bi-axial compression, the transformation of the dislocation to a locally disordered configuration occurs for similar strain magnitudes. The formation of an opening, however, requires larger strains, of about 7.5%. Furthermore, our results suggest that electronic structure methods should be favoured to model dislocation cores in case of large strains whenever possible.
Temporal Cyber Attack Detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, Joey Burton; Draelos, Timothy J.; Galiardi, Meghan
Rigorous characterization of the performance and generalization ability of cyber defense systems is extremely difficult, making it hard to gauge uncertainty, and thus, confidence. This difficulty largely stems from a lack of labeled attack data that fully explores the potential adversarial space. Currently, performance of cyber defense systems is typically evaluated in a qualitative manner by manually inspecting the results of the system on live data and adjusting as needed. Additionally, machine learning has shown promise in deriving models that automatically learn indicators of compromise that are more robust than analyst-derived detectors. However, to generate these models, most algorithms require large amounts of labeled data (i.e., examples of attacks). Algorithms that do not require annotated data to derive models are similarly at a disadvantage, because labeled data is still necessary when evaluating performance. In this work, we explore the use of temporal generative models to learn cyber attack graph representations and automatically generate data for experimentation and evaluation. Training and evaluating cyber systems and machine learning models requires significant, annotated data, which is typically collected and labeled by hand for one-off experiments. Automatically generating such data helps derive/evaluate detection models and ensures reproducibility of results. Experimentally, we demonstrate the efficacy of generative sequence analysis techniques on learning the structure of attack graphs, based on a realistic example. These derived models can then be used to generate more data. Additionally, we provide a roadmap for future research efforts in this area.
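A minimal stand-in for the generative-sequence idea (not the authors' model): fit a first-order Markov chain to a few hypothetical labeled attack traces and sample synthetic sequences from it for training or evaluation.

```python
# Fit a first-order Markov chain to attack-step sequences and sample new ones.
import random
from collections import defaultdict

training_sequences = [                      # hypothetical labeled attack traces
    ["recon", "phish", "foothold", "priv_esc", "exfil"],
    ["recon", "scan", "exploit", "foothold", "lateral", "exfil"],
    ["scan", "exploit", "foothold", "priv_esc", "lateral", "exfil"],
]

# Estimate transition counts, including virtual START/END states.
trans = defaultdict(lambda: defaultdict(int))
for seq in training_sequences:
    states = ["START"] + seq + ["END"]
    for a, b in zip(states, states[1:]):
        trans[a][b] += 1

rng = random.Random(3)

def sample_sequence():
    """Walk the chain from START to END, choosing successors by observed frequency."""
    state, out = "START", []
    while True:
        nxt = rng.choices(list(trans[state]), weights=list(trans[state].values()))[0]
        if nxt == "END":
            return out
        out.append(nxt)
        state = nxt

for _ in range(3):
    print(" -> ".join(sample_sequence()))
```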
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander
2014-09-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level to typically developing children.
Defect printability for high-exposure dose advanced packaging applications
NASA Astrophysics Data System (ADS)
Mikles, Max; Flack, Warren; Nguyen, Ha-Ai; Schurz, Dan
2003-12-01
Pellicles are used in semiconductor lithography to minimize printable defects and reduce reticle cleaning frequency. However, there are a growing number of microlithography applications, such as advanced packaging and nanotechnology, where it is not clear that pellicles always offer a significant benefit. These applications have relatively large critical dimensions and require ultra thick photoresists with extremely high exposure doses. Given that the lithography is performed in Class 100 cleanroom conditions, it is possible that the risk of defects from contamination is sufficiently low that pellicles would not be required on certain process layer reticles. The elimination of the pellicle requirement would provide a cost reduction by saving the original pellicle cost and eliminating future pellicle replacement and repair costs. This study examines the imaging potential of defects with reticle patterns and processes typical for gold-bump and solder-bump advanced packaging lithography. The test reticle consists of 30 to 90 μm octagonal contact patterns representative of advanced packaging reticles. Programmed defects are added that represent the range of particle sizes (3 to 30 μm) normally protected by the pellicle and that are typical of advanced packaging lithography cleanrooms. The reticle is exposed using an Ultratech Saturn Spectrum 300e2 1X stepper on wafers coated with a variety of ultra thick (30 to 100 μm) positive and negative-acting photoresists commonly used in advanced packaging. The experimental results show that in many cases smaller particles continue to be yield issues for the feature size and density typical of advanced packaging processes. For the two negative photoresists studied, it appears that a pellicle is not required for protection from defects smaller than 10 to 15 μm, depending on the photoresist thickness. Thus the decision on pellicle usage for these materials would need to be made based on the device fabrication process and the cleanliness of a fabrication facility. For the two positive photoresists studied, it appears that a pellicle is required to protect from defects down to 3 μm, depending on the photoresist thickness. This suggests that a pellicle should always be used for these materials. Since a typical fabrication facility would use both positive and negative photoresists, it may be advantageous to use pellicles on all reticles simply to avoid confusion. The cost savings of not using a pellicle could easily be outweighed by the yield benefits of using one.
NASA Astrophysics Data System (ADS)
Vicuña, Cristián Molina; Höweler, Christoph
2017-12-01
The use of AE in machine failure diagnosis has increased over recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and Kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction, while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal, which we call TriSignal, also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the times of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
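The following sketch illustrates the overall pipeline rather than the authors' implementation: detect bursts in a synthetic AE record, keep a few envelope points per burst as an unevenly sampled "TriSignal", and pass those points to the Lomb-Scargle periodogram; the burst rate, thresholds, and signal parameters are all assumed.

```python
# Toy AE record -> triangular envelope points -> Lomb-Scargle periodogram.
import numpy as np
from scipy.signal import hilbert, lombscargle

fs = 2_000_000                               # 2 MHz sampling, as mentioned above
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(4)
raw = 0.02 * rng.normal(size=t.size)
burst_rate = 35.0                            # assumed fault-related burst repetition rate, Hz
for t0 in np.arange(0.005, 0.2, 1 / burst_rate):
    idx = (t >= t0) & (t < t0 + 5e-4)        # inject short decaying AE bursts
    raw[idx] += np.exp(-(t[idx] - t0) / 1e-4) * np.sin(2 * np.pi * 1.5e5 * t[idx])

# Triangular "TriSignal": for each burst keep (start, 0), (peak time, peak), (end, 0).
env = np.abs(hilbert(raw))
above = env > 0.1
edges = np.flatnonzero(np.diff(above.astype(int)))
tri_t, tri_y = [], []
for s, e in zip(edges[::2], edges[1::2]):
    p = s + np.argmax(env[s:e + 1])
    tri_t += [t[s], t[p], t[e]]
    tri_y += [0.0, env[p], 0.0]
tri_t, tri_y = np.array(tri_t), np.array(tri_y)

# Burst timing is preserved, and the Lomb-Scargle periodogram accepts the
# unevenly sampled envelope points directly.
freqs_hz = np.linspace(1.0, 50.0, 400)
pgram = lombscargle(tri_t, tri_y, 2 * np.pi * freqs_hz)
spacing = np.mean(np.diff(tri_t[1::3]))      # spacing of burst peak times
print(f"{raw.size} raw samples reduced to {tri_t.size} TriSignal points; "
      f"mean burst spacing {spacing * 1e3:.1f} ms (~{1 / spacing:.0f} bursts/s)")
```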
Filliter, Jillian H; Glover, Jacqueline M; McMullen, Patricia A; Salmon, Joshua P; Johnson, Shannon A
2016-03-01
Houses have often been used as comparison stimuli in face-processing studies because of the many attributes they share with faces (e.g., distinct members of a basic category, consistent internal features, mono-orientation, and relative familiarity). Despite this, no large, well-controlled databases of photographs of houses that have been developed for research use currently exist. To address this gap, we photographed 100 houses and carefully edited these images. We then asked 41 undergraduate students (18 to 31 years of age) to rate each house on three dimensions: typicality, likeability, and face-likeness. The ratings had a high degree of face validity, and analyses revealed a significant positive correlation between typicality and likeability. We anticipate that this stimulus set (i.e., the DalHouses) and the associated ratings will prove useful to face-processing researchers by minimizing the effort required to acquire stimuli and allowing for easier replication and extension of studies. The photographs of all 100 houses and their ratings data can be obtained at http://dx.doi.org/10.6084/m9.figshare.1279430.
Cuvo, A J; Lerch, L J; Leurquin, D A; Gaffaney, T J; Poppen, R L
1998-01-01
The present experiments examined the effect of work requirements in combination with reinforcement schedule on the choice behavior of adults with mental retardation and preschool children. The work requirements of age-appropriate tasks (i.e., sorting silverware, jumping hurdles, tossing beanbags) were manipulated. Participants were presented with their choice of two response options for each trial that varied simultaneously on both work requirement and reinforcement schedule. Results showed that when responding to both choices occurred on the same reinforcement schedule, participants allocated most of their responses to the option with the easier work requirement. When the response option requiring less work was on a leaner reinforcement schedule, most participants shifted their choice to exert more work. There were individual differences across participants regarding their pattern of responding and when they switched from the lesser to the greater work requirement. Data showed that participants' responding was largely controlled by the reinforcement received for responding to each level of work. Various conceptualizations regarding the effects of work requirements on choice behavior are discussed. PMID:9532750
Supramolecular gel electrophoresis of large DNA fragments.
Tazawa, Shohei; Kobayashi, Kazuhiro; Oyoshi, Takanori; Yamanaka, Masamichi
2017-10-01
Pulsed-field gel electrophoresis is a technique frequently used to separate exceptionally large DNA fragments. In a typical continuous field electrophoresis, it is challenging to separate DNA fragments larger than 20 kbp because they migrate at a comparable rate. To overcome this challenge, it is necessary to develop a novel matrix for the electrophoresis. Here, we describe the electrophoresis of large DNA fragments up to 166 kbp using a supramolecular gel matrix and a typical continuous field electrophoresis system. C3-symmetric tris-urea self-assembled into a supramolecular hydrogel in tris-boric acid-EDTA buffer, a typical buffer for DNA electrophoresis, and the supramolecular hydrogel was used as a matrix for electrophoresis to separate large DNA fragments. Three types of DNA marker, the λ-Hind III digest (2 to 23 kbp), Lambda DNA-Mono Cut Mix (10 to 49 kbp), and Marker 7 GT (10 to 165 kbp), were analyzed in this study. Large DNA fragments of greater than 100 kbp showed distinct mobility using a typical continuous field electrophoresis system. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only a few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other, leading to new insights into biomolecular dynamics and function.
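The comparison against experiment ultimately rests on the standard Förster relation E = 1/(1 + (r/R0)^6); a worked example on a synthetic distance trajectory follows, where the Förster radius and the distance distribution are assumptions rather than values from this work.

```python
# Average FRET efficiency over an ensemble of inter-dye distances.
import numpy as np

R0 = 5.4                                           # nm, assumed Forster radius of the dye pair
rng = np.random.default_rng(5)
r = rng.normal(loc=5.0, scale=0.8, size=10_000)    # synthetic inter-dye distances (nm)
r = r[r > 0]

E_per_frame = 1.0 / (1.0 + (r / R0) ** 6)          # Forster relation per frame
print(f"mean transfer efficiency <E> = {E_per_frame.mean():.2f}")
```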
Preparing for in situ processing on upcoming leading-edge supercomputers
Kress, James; Churchill, Randy Michael; Klasky, Scott; ...
2016-10-01
High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to help a large-scale fusion simulation code succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
A fast image simulation algorithm for scanning transmission electron microscopy
Ophus, Colin
2017-05-10
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
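To put the quoted f^4 scaling from the two PRISM records above into plain numbers (nominal figures only, ignoring overheads and accuracy trade-offs):

```python
# Nominal multislice speedup for typical PRISM interpolation factors.
for f in (4, 8, 16, 20):
    print(f"f = {f:>2}: nominal speedup ~ f**4 = {f**4:,}x")
```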
A review of high magnetic moment thin films for microscale and nanotechnology applications
Scheunert, Gunther; Heinonen, O.; Hardeman, R.; ...
2016-02-17
Here, the creation of large magnetic fields is a necessary component in many technologies, ranging from magnetic resonance imaging, electric motors and generators, and magnetic hard disk drives in information storage. This is typically done by inserting a ferromagnetic pole piece with a large magnetisation density M_S in a solenoid. In addition to large M_S, it is usually required or desired that the ferromagnet is magnetically soft and has a Curie temperature well above the operating temperature of the device. A variety of ferromagnetic materials are currently in use, ranging from FeCo alloys in, for example, hard disk drives, to rare earth metals operating at cryogenic temperatures in superconducting solenoids. These latter can exceed the limit on M_S for transition metal alloys given by the Slater-Pauling curve. This article reviews different materials and concepts in use or proposed for technological applications that require a large M_S, with an emphasis on nanoscale material systems, such as thin and ultra-thin films. Attention is also paid to other requirements or properties, such as the Curie temperature and magnetic softness. In a final summary, we evaluate the actual applicability of the discussed materials for use as pole tips in electromagnets, in particular, in nanoscale magnetic hard disk drive read-write heads; the technological advancement of the latter has been a very strong driving force in the development of the field of nanomagnetism.
An organizational culture gap analysis in 6 New Zealand community pharmacies.
Scahill, Shane L; Carswell, Peter; Harrison, Jeff
2011-09-01
The barriers to moving forward and meeting the expectations of policy makers and professional pharmacy bodies appear to relate to the organizational culture of community pharmacy. Despite the importance of cultural change for business transformation, organizational culture has largely gone unnoticed in community pharmacy practice research. To perform an organizational culture gap analysis in 6 New Zealand community pharmacies. Mean scores from a cultural rating survey (n=47) were calculated for 8 cultural clusters and mapped onto a typical and a beneficial pattern match (ladder diagram) for each case site. These ladder diagrams provide an understanding of the gap between the 2 ratings based on the gradient of the lines joining cultural clusters (the rungs of the ladder). Software can be used to generate a Pearson correlation describing the strength of the relationship between the typical and beneficial ratings. Eight cultural clusters were mapped: "leadership and staff management"; "valuing each other and the team"; "free-thinking, fun and open to challenge"; "trusted behavior"; "customer relations"; "focus on external integration"; "provision of systematic advice"; and the "embracing of innovation." Analysis suggested a high level of correlation between the means of the typical and beneficial ratings. Although the variance between average ratings might be quite small, the relative difference can still be meaningful to participants in the cultural setting. The diagrams suggest a requirement for external integration, the provision of systematic advice, and the embracing of innovation to become more typical in most pharmacies. Trusted behavior is the most typical and most beneficial cultural dimension in most pharmacies, whereas valuing each other and the team is the least beneficial. Gaps in organizational culture have been identified through the use of a rating survey. The dimensions of focus on external integration, providing systematic advice, and embracing innovation require further exploration through interviews in case site pharmacies. Copyright © 2011 Elsevier Inc. All rights reserved.
Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.
Frommholz, Ingo; Roelleke, Thomas
2016-01-01
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.
Conception and characterization of a virtual coplanar grid for a 11×11 pixelated CZT detector
NASA Astrophysics Data System (ADS)
Espagnet, Romain; Frezza, Andrea; Martin, Jean-Pierre; Hamel, Louis-André; Després, Philippe
2017-07-01
Due to the low mobility of holes in CZT, commercially available detectors with a relatively large volume typically use a pixelated anode structure. They are mostly used in imaging applications and often require a dense electronic readout scheme. These large volume detectors are also interesting for high-sensitivity applications, and a CZT-based blood gamma counter was developed from a 20×20×15 mm³ crystal available commercially and having an 11×11 pixelated readout scheme. A method is proposed here to reduce the number of channels required to use the crystal in a high-sensitivity counting application, dedicated to pharmacokinetic modelling in PET and SPECT. Inspired by a classic coplanar anode, an implementation of a virtual coplanar grid was done by connecting the 121 pixels of the detector to form intercalated bands. The layout, the front-end electronics and the characterization of the detector in this 2-channel anode geometry are presented. The coefficients required to compensate for electron trapping in CZT were determined experimentally to improve the performance. The resulting virtual coplanar detector has an intrinsic efficiency of 34% and an energy resolution of 8% at 662 keV. The detector's response was linear between 80 keV and 1372 keV. This suggests that large CZT crystals offer an excellent alternative to scintillation detectors for some applications, especially those where high sensitivity and compactness are required.
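For orientation, standard coplanar-grid sensing subtracts the non-collecting grid signal with a relative gain and then corrects for electron trapping; the sketch below is illustrative only, and the gain, trapping coefficient, and depth-correction form are assumptions rather than this detector's calibration.

```python
# Toy coplanar-grid energy reconstruction with a depth-dependent trapping correction.
import numpy as np

w = 0.8          # relative gain between the two virtual coplanar grids (assumed)
g = 0.15         # trapping-compensation coefficient, fitted experimentally in practice (assumed)

def corrected_energy(A, B, depth):
    """A, B: collecting / non-collecting grid amplitudes; depth in [0, 1] (cathode -> anode)."""
    raw = A - w * B                           # single-polarity charge sensing
    return raw / (1.0 - g * (1.0 - depth))    # boost events far from the anode to offset trapping

# Toy events with the same deposited energy but different interaction depths.
A = np.array([0.95, 0.90, 0.85])
B = np.array([0.05, 0.08, 0.12])
depth = np.array([0.9, 0.5, 0.1])
print(corrected_energy(A, B, depth))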
[A large-scale accident in Alpine terrain].
Wildner, M; Paal, P
2015-02-01
Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, time pressure, compounded by adverse weather conditions or darkness, is enormous. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.
Genomic Data Quality Impacts Automated Detection of Lateral Gene Transfer in Fungi
Dupont, Pierre-Yves; Cox, Murray P.
2017-01-01
Lateral gene transfer (LGT, also known as horizontal gene transfer), an atypical mechanism of transferring genes between species, has almost become the default explanation for genes that display an unexpected composition or phylogeny. Numerous methods of detecting LGT events all rely on two fundamental strategies: primary structure composition or gene tree/species tree comparisons. Discouragingly, the results of these different approaches rarely coincide. With the wealth of genome data now available, detection of laterally transferred genes is increasingly being attempted in large uncurated eukaryotic datasets. However, detection methods depend greatly on the quality of the underlying genomic data, which are typically complex for eukaryotes. Furthermore, given the automated nature of genomic data collection, it is typically impractical to manually verify all protein or gene models, orthology predictions, and multiple sequence alignments, requiring researchers to accept a substantial margin of error in their datasets. Using a test case comprising plant-associated genomes across the fungal kingdom, this study reveals that composition- and phylogeny-based methods have little statistical power to detect laterally transferred genes. In particular, phylogenetic methods reveal extreme levels of topological variation in fungal gene trees, the vast majority of which show departures from the canonical species tree. Therefore, it is inherently challenging to detect LGT events in typical eukaryotic genomes. This finding is in striking contrast to the large number of claims for laterally transferred genes in eukaryotic species that routinely appear in the literature, and questions how many of these proposed examples are statistically well supported. PMID:28235827
Design and fabrication of giant micromirrors using electroplating-based technology
NASA Astrophysics Data System (ADS)
Ilias, Samir; Topart, Patrice A.; Larouche, Carl; Leclair, Sebastien; Jerominek, Hubert
2005-01-01
Giant micromirrors with large scanning deflection and good flatness are required for many space and terrestrial applications. A novel approach to manufacturing this category of micromirrors is proposed. The approach combines selective electroplating and flip-chip based technologies. It allows for large air gaps, flat and smooth active micromirror surfaces and permits independent fabrication of the micromirrors and control electronics, avoiding temperature and sacrificial layer incompatibilities between them. In this work, electrostatically actuated piston and torsion micromirrors were designed and simulated. The simulated structures were designed to allow large deflection, i.e. piston displacement larger than 10 μm and torsional deflection up to 35°. To achieve large micromirror deflections, resists up to 70 μm thick were used as a micromold for nickel and solder electroplating. Smooth micromirror surfaces (roughness lower than 5 nm rms) and large radii of curvature (R as large as 23 cm for a typical 1000×1000 μm² micromirror fabricated without address circuits) were achieved. A detailed fabrication process is presented. First piston mirror prototypes were fabricated and a preliminary evaluation of the static deflection of a piston mirror is presented.
2012-02-17
Space Shuttle Payloads: Kennedy Space Center was the hub for the final preparation and launch of the space shuttle and its payloads. The shuttle carried a wide variety of payloads into Earth orbit. Not all payloads were installed in the shuttle's cargo bay. In-cabin payloads were carried in the shuttle's middeck. Cargo bay payloads were typically large payloads which did not require a pressurized environment, such as interplanetary space probes, earth-orbiting satellites, scientific laboratories and International Space Station trusses and components. Poster designed by Kennedy Space Center Graphics Department/Greg Lee. Credit: NASA
Management of sizeable carotid body tumor: Case report and review of literature.
Elsharawy, Mohamed A; Alsaif, Hind; Elsaid, Aymen; Kredees, Ali
2013-10-01
Carotid body tumor is a paraganglioma derived from the neural crest. It arises from the carotid body, which acts as a vascular chemoreceptor and is usually located at the carotid bifurcation. Sizeable (Shamblin III, >5 cm) tumors are large and typically encase the carotid artery, requiring vessel resection and replacement. Management of such tumors carries a high risk of postoperative mortality and morbidity, especially with regard to neurovascular complications. We report a case of a sizeable tumor which was surgically removed with minimal complications.
Management of sizeable carotid body tumor: Case report and review of literature
Elsharawy, Mohamed A; Alsaif, Hind; Elsaid, Aymen; Kredees, Ali
2013-01-01
Carotid body tumor is a paraganglioma derived from the neural crest. It arises from the carotid body, which acts as a vascular chemoreceptor and is usually located at the carotid bifurcation. Sizeable (Shamblin III, >5 cm) tumors are large and typically encase the carotid artery, requiring vessel resection and replacement. Management of such tumors carries a high risk of postoperative mortality and morbidity, especially with regard to neurovascular complications. We report a case of a sizeable tumor which was surgically removed with minimal complications. PMID:24327970
Color object detection using spatial-color joint probability functions.
Luo, Jiebo; Crandall, David
2006-06-01
Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
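A toy version of the general idea (not the paper's exact histogram definition or matching procedure) is sketched below: quantize colors, locate edge pixels, accumulate a co-occurrence histogram of quantized color pairs a small offset apart, and compare histograms by intersection; the synthetic "flag", thresholds, and offsets are assumptions.

```python
# Toy color edge co-occurrence histogram and histogram-intersection matching.
import numpy as np

def quantize(img, levels=4):
    """Map each RGB pixel to a single color index in [0, levels**3)."""
    q = (img.astype(int) * levels) // 256
    return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

def edge_cooccurrence_histogram(img, levels=4, offset=2, edge_thresh=20.0):
    qi = quantize(img, levels)
    # Edge map from per-channel gradient magnitudes.
    grads = [np.hypot(*np.gradient(img[..., c].astype(float))) for c in range(3)]
    ys, xs = np.nonzero(np.maximum.reduce(grads) > edge_thresh)
    hist = np.zeros((levels ** 3, levels ** 3))
    h, w = qi.shape
    for dy, dx in ((0, offset), (offset, 0)):   # color pairs across/along the edge
        ok = (ys + dy < h) & (xs + dx < w)
        np.add.at(hist, (qi[ys[ok], xs[ok]], qi[ys[ok] + dy, xs[ok] + dx]), 1)
    return hist / max(hist.sum(), 1.0)

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical edge color structure."""
    return float(np.minimum(h1, h2).sum())

# Synthetic two-color "flag" as the model; a noisy copy stands in for a detection window.
rng = np.random.default_rng(6)
model = np.zeros((60, 90, 3), dtype=np.uint8)
model[:30] = (220, 30, 30)
model[30:] = (240, 240, 240)
scene = np.clip(model.astype(int) + rng.integers(-15, 16, model.shape), 0, 255).astype(np.uint8)
print(f"similarity = {similarity(edge_cooccurrence_histogram(model), edge_cooccurrence_histogram(scene)):.2f}")
```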
The components of working memory updating: an experimental decomposition and individual differences.
Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes, in particular substitution skills, were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Radiative PQ breaking and the Higgs boson mass
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2015-06-01
The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ˜ 5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
The 200-kilowatt wind turbine project
NASA Technical Reports Server (NTRS)
1978-01-01
The three 200-kilowatt wind turbines described compose the first of three separate systems. Proposed wind turbines of the two other systems, although similar in design, are larger in both physical size and rated power generation. The overall objective of the project is to obtain early operation and performance data while gaining initial experience in the operation of large, horizontal-axis wind turbines in typical utility environments. Several of the key issues addressed include the following: (1) impact of the variable power output (due to varying wind speeds) on the utility grid; (2) compatibility with utility requirements (voltage and frequency control of generated power); (3) demonstration of unattended, fail-safe operation; (4) reliability of the wind turbine system; (5) required maintenance; and (6) initial public reaction and acceptance.
Practical Considerations for Optimizing Position Sensitivity in Arrays of Position-sensitive TES's
NASA Technical Reports Server (NTRS)
Smith, Stephen J.; Bandler, Simon R.; Figueroa-Feliciano, Enectali; Iyomoto, Naoko; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.; Sadleir, John E.
2007-01-01
We are developing Position-Sensitive Transition-Edge Sensors (PoST's) for future X-ray astronomy missions such as NASA's Constellation-X. The PoST consists of one or more Transition-Edge Sensors (TES's) thermally connected to a large X-ray absorber, which, through heat diffusion, gives rise to position dependence. The development of PoST's is motivated by the desire to achieve the largest focal-plane coverage with the fewest number of readout channels. In order to develop a practical array, consisting of an inner pixellated core with an outer array of large-absorber PoST's, we must be able to simultaneously read out all (~1800) channels in the array. This is achievable using time division multiplexing (TDM), but does set stringent slew rate requirements on the array. Typically, we must damp the pulses to reduce the slew rate of the input signal to the TDM. This is achieved by applying a low-pass analog filter with large inductance to the signal. This attenuates the high frequency components of the signal, essential for position discrimination in PoST's, relative to the white noise of the readout chain and degrades the position sensitivity. Using numerically simulated data, we investigate the position sensing ability of typical PoST designs under such high inductance conditions. We investigate signal-processing techniques for optimal determination of the event position and discuss the practical considerations for real-time implementation.
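To illustrate the slew-rate trade-off described above, the following sketch low-pass filters a generic two-exponential microcalorimeter pulse with a one-pole filter (a crude stand-in for the large series inductance) and compares peak slew rates before and after filtering. The pulse time constants, sample rate, and cutoff frequency are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed pulse shape and readout parameters -- illustrative only.
fs = 1.0e6                                        # sample rate (Hz)
t = np.arange(0.0, 5e-3, 1.0 / fs)
pulse = np.exp(-t / 1e-3) - np.exp(-t / 20e-6)    # generic fast-rise / slow-decay pulse

def one_pole_low_pass(x, f_cut, fs):
    # Discrete one-pole IIR low-pass, mimicking an L/R analog filter.
    a = np.exp(-2.0 * np.pi * f_cut / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = a * acc + (1.0 - a) * v
        y[i] = acc
    return y

filtered = one_pole_low_pass(pulse, f_cut=5e3, fs=fs)
slew = lambda x: np.max(np.abs(np.diff(x))) * fs   # peak slew rate per second
print(f"peak slew: raw {slew(pulse):.3g}/s, filtered {slew(filtered):.3g}/s")
```

The filtered pulse is far easier on the multiplexer, but its fast rising edge, which carries the position information, is strongly attenuated, which is exactly the degradation the paper investigates.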
Yang, X I A; Meneveau, C
2017-04-13
In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that typically cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it for the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model, which accounts for the drag of unresolved roughness, is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale on small-scale roughness elements. Taking into account the sheltering effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide analytical predictions that agree well with roughness parameters determined from LES. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Key optical components for spaceborne lasers
NASA Astrophysics Data System (ADS)
Löhring, J.; Winzen, M.; Faidel, H.; Miesner, J.; Plum, D.; Klein, J.; Fitzau, O.; Giesberts, M.; Brandenburg, W.; Seidel, A.; Schwanen, N.; Riesters, D.; Hengesbach, S.; Hoffmann, H.-D.
2016-03-01
Spaceborne lidar (light detection and ranging) systems have a large potential to become powerful instruments in the field of atmospheric research. Obviously, they have to be in operation for about three years without any maintenance such as readjustment. Furthermore, they have to withstand strong temperature cycles, typically in the range of -30 to +50 °C, as well as mechanical shocks and vibrations, especially during launch. Additionally, the avoidance of any organic material inside the laser box is required, particularly in UV lasers. For atmospheric research, pulses of several tens of mJ at repetition rates of several tens of Hz are required in many cases. Those parameters are typically addressed by DPSSLs that comprise components such as laser crystals, nonlinear crystals in Pockels cells, Faraday isolators and frequency converters, passive fibers, diode lasers and, of course, a lot of mirrors and lenses. In particular, some components have strong requirements regarding their tilt stability, which is often in the 10 μrad range. In most cases, components and packages that are used for industrial lasers do not fulfil all those requirements. Thus, the packaging of all these key components has been developed to meet those specifications using only metal and ceramics besides the optical component itself. All joints between the optical component and the laser baseplate are soldered or screwed. No clamps or adhesives are used. Most of the critical properties, such as tilt stability after temperature cycling, have been proven in several tests. Currently, these components are used to build up first prototypes for spaceborne systems.
Methods and apparatus for transparent display using scattering nanoparticles
Hsu, Chia Wei; Qiu, Wenjun; Zhen, Bo; Shapira, Ofer; Soljacic, Marin
2017-06-14
Transparent displays enable many useful applications, including heads-up displays for cars and aircraft as well as displays on eyeglasses and glass windows. Unfortunately, transparent displays made of organic light-emitting diodes are typically expensive and opaque. Heads-up displays often require fixed light sources and have limited viewing angles. And transparent displays that use frequency conversion are typically energy inefficient. Conversely, the present transparent displays operate by scattering visible light from resonant nanoparticles with narrowband scattering cross sections and small absorption cross sections. More specifically, projecting an image onto a transparent screen doped with nanoparticles that selectively scatter light at the image wavelength(s) yields an image on the screen visible to an observer. Because the nanoparticles scatter light at only certain wavelengths, the screen is practically transparent under ambient light. Exemplary transparent scattering displays can be simple, inexpensive, scalable to large sizes, viewable over wide angular ranges, energy efficient, and transparent simultaneously.
Megawatt-Scale Application of Thermoelectric Devices in Thermal Power Plants
NASA Astrophysics Data System (ADS)
Knox, A. R.; Buckle, J.; Siviter, J.; Montecucco, A.; McCulloch, E.
2013-07-01
Despite the recent investment in renewable and sustainable energy sources, over 95% of the UK's electrical energy generation relies on the use of thermal power plants utilizing the Rankine cycle. Advanced supercritical Rankine cycle power plants typically have a steam temperature in excess of 600°C at a pressure of 290 bar and yet still have an overall efficiency below 50%, with much of this wasted energy being rejected to the environment through the condenser/cooling tower. This paper examines the opportunity for large-scale application of thermoelectric heat pumps to modify the Rankine cycle in such plants by preheating the boiler feedwater using energy recovered from the condenser system at a rate of approximately 1 MWth per °C temperature rise. A derivation of the improved process cycle efficiency and breakeven coefficient of performance required for economic operation is presented for a typical supercritical 600-MWe installation.
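As a heavily simplified, first-order check on the breakeven idea above (and not the paper's full-cycle derivation), one can argue that a unit of electricity driving the heat pump "costs" roughly 1/η units of fuel heat at overall cycle efficiency η, while the pump returns COP units of heat to the feedwater, so breakeven requires COP of at least about 1/η. The efficiency value below is an assumption for illustration.

```python
# Hedged back-of-envelope only: the paper's breakeven COP follows the full
# Rankine-cycle analysis and will differ from this first-order figure.
eta = 0.45                     # assumed overall fuel-to-electricity efficiency
cop_breakeven = 1.0 / eta      # heat returned must at least match the fuel-heat value of the electricity used
print(f"first-order breakeven COP ~ {cop_breakeven:.1f}")
```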
Freire-Picos, M A; Landeira-Ameijeiras, V; Mayán, María D
2013-07-01
The correct distribution of nuclear domains is critical for the maintenance of normal cellular processes such as transcription and replication, which are regulated depending on their location and surroundings. The most well-characterized nuclear domain, the nucleolus, is essential for cell survival and metabolism. Alterations in nucleolar structure affect nuclear dynamics; however, how the nucleolus and the rest of the nuclear domains are interconnected is largely unknown. In this report, we demonstrate that RNAP-II is vital for the maintenance of the typical crescent-shaped structure of the nucleolar rDNA repeats and rRNA transcription. When stalled RNAP-II molecules are not bound to the chromatin, the nucleolus loses its typical crescent-shaped structure. However, the RNAP-II interaction with Seh1p, or cryptic transcription by RNAP-II, is not critical for morphological changes. Copyright © 2013 John Wiley & Sons, Ltd.
Methods and apparatus for transparent display using scattering nanoparticles
Hsu, Chia Wei; Qiu, Wenjun; Zhen, Bo; Shapira, Ofer; Soljacic, Marin
2016-05-10
Transparent displays enable many useful applications, including heads-up displays for cars and aircraft as well as displays on eyeglasses and glass windows. Unfortunately, transparent displays made of organic light-emitting diodes are typically expensive and opaque. Heads-up displays often require fixed light sources and have limited viewing angles. And transparent displays that use frequency conversion are typically energy inefficient. Conversely, the present transparent displays operate by scattering visible light from resonant nanoparticles with narrowband scattering cross sections and small absorption cross sections. More specifically, projecting an image onto a transparent screen doped with nanoparticles that selectively scatter light at the image wavelength(s) yields an image on the screen visible to an observer. Because the nanoparticles scatter light at only certain wavelengths, the screen is practically transparent under ambient light. Exemplary transparent scattering displays can be simple, inexpensive, scalable to large sizes, viewable over wide angular ranges, energy efficient, and transparent simultaneously.
Radioisotope experiments in physics, chemistry, and biology. Second revised edition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, J.B.
It is stated that the main object of the book is to show that a large number of experiments in chemistry, physics and biology can be safely carried out with a minimal amount of equipment. No sophisticated counting equipment is required; in most cases simple Geiger counters or photographic emulsions are used, but a few experiments are included for use with other forms of detectors, such as pulse electroscopes, which are often found in schools. Using naturally occurring compounds, sealed sources and some unsealed sources of low specific activity, experiments are given of typical applications in statistics, electronics, photography, health physics, botany and so on. The necessary theoretical background is presented in the introductory chapters and typical problems are given at the end of the book. The book is intended for GCE and Advanced level students. (UK)
Chen, Hong-Ming; Armstrong, Zachary; Hallam, Steven J; Withers, Stephen G
2016-02-08
Screening of large enzyme libraries such as those derived from metagenomic sources requires sensitive substrates. Fluorogenic glycosides typically offer the best sensitivity but must usually be used in a stopped format to generate a good signal. Use of fluorescent phenols of pKa < 7, such as halogenated coumarins, allows direct screening at neutral pH. The synthesis and characterisation of a set of nine different glycosides of 6-chloro-4-methylumbelliferone are described. The use of these substrates in a pooled format for screening of expressed metagenomic libraries yielded a "hit rate" of 1 in 60. Hits were then readily deconvoluted with the individual substrates in a single plate to identify specific activities within each clone. The use of such a collection of substrates greatly accelerates the screening process. Copyright © 2015 Elsevier Ltd. All rights reserved.
Advantages of high-frequency Pulse-tube technology and its applications in infrared sensing
NASA Astrophysics Data System (ADS)
Arts, R.; Willems, D.; Mullié, J.; Benschop, T.
2016-05-01
The low-frequency pulse-tube cryocooler has been a workhorse for large heat-lift applications. However, the high-frequency pulse tube has to date not seen the widespread use in tactical infrared applications that Stirling cryocoolers have had, despite significant advantages in terms of exported vibrations and lifetime. Thales Cryogenics has produced large series of high-frequency pulse-tube cryocoolers for non-infrared applications since 2005. However, the use of Thales pulse-tube cryocoolers for infrared sensing has to date largely been limited to high-end space applications. In this paper, the performances of existing available off-the-shelf pulse-tube cryocoolers are examined versus typical tactical infrared requirements. A comparison is made on efficiency, power density, reliability, and cost. An outlook is given on future developments that could bring the pulse tube into the mainstream for tactical infrared applications.
HIGH-EFFICIENCY AUTONOMOUS LASER ADAPTIVE OPTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baranec, Christoph; Riddle, Reed; Tendulkar, Shriharsh
2014-07-20
As new large-scale astronomical surveys greatly increase the number of objects targeted and discoveries made, the requirement for efficient follow-up observations is crucial. Adaptive optics imaging, which compensates for the image-blurring effects of Earth's turbulent atmosphere, is essential for these surveys, but the scarcity, complexity and high demand of current systems limit their availability for following up large numbers of targets. To address this need, we have engineered and implemented Robo-AO, a fully autonomous laser adaptive optics and imaging system that routinely images over 200 objects per night with an acuity 10 times sharper at visible wavelengths than typically possible from the ground. By greatly improving the angular resolution, sensitivity, and efficiency of 1-3 m class telescopes, we have eliminated a major obstacle in the follow-up of the discoveries from current and future large astronomical surveys.
Wilderness-dependent wildlife: The large and carnivorous
Mattson, David J.
1997-01-01
Wilderness is vital to the conservation of wildlife species that are prone to conflict with humans and vulnerable to human-caused mortality. These species tend to be large and are often carnivorous. Such animals are typically problematic for humans because they kill livestock and, occasionally, humans, and cause inordinate damage to crops. The vulnerability of large herbivores and carnivores to humans is exacerbated by vigorous markets for wild meat and other body parts, widespread human poverty, and human societies prone to the breakdown of civil order. The survival of wilderness-dependent wildlife is thus not only linked to the preservation of extensive wilderness but is also affected by the health of human societies. Because overt intervention has limited uses in the preservation of wilderness-dependent wildlife, these animals pose a special problem for humanity. Their survival requires that we forgo domination of a substantial portion of the remaining wildlands on Earth.
The island rule: made to be broken?
Meiri, Shai; Cooper, Natalie; Purvis, Andy
2007-01-01
The island rule is a hypothesis whereby small mammals evolve larger size on islands while large insular mammals dwarf. The rule is believed to emanate from small mammals growing larger to control more resources and enhance metabolic efficiency, while large mammals evolve smaller size to reduce resource requirements and increase reproductive output. We show that there is no evidence for the existence of the island rule when phylogenetic comparative methods are applied to a large, high-quality dataset. Rather, there are just a few clade-specific patterns: carnivores; heteromyid rodents; and artiodactyls typically evolve smaller size on islands whereas murid rodents usually grow larger. The island rule is probably an artefact of comparing distantly related groups showing clade-specific responses to insularity. Instead of a rule, size evolution on islands is likely to be governed by the biotic and abiotic characteristics of different islands, the biology of the species in question and contingency. PMID:17986433
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than that of a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues to open a non-inferiority trial. Furthermore, due to lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial and a large number of non-inferiority clinical questions still remain unanswered. In this paper, we want to develop some designs for non-inferiority randomized phase II trials with feasible sample sizes. At first, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three different non-inferiority phase II trial designs are used under different settings, but require similar sample sizes that are typical for phase II trials.
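As a back-of-envelope illustration of why non-inferiority trials need roughly four times more patients, the sketch below applies the standard normal-approximation sample-size formula for comparing two means. The key assumption, made here for illustration and not taken from the paper, is that the non-inferiority margin is set at half the effect size targeted in the corresponding superiority trial.

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.025, power=0.8):
    # Normal-approximation per-arm sample size for a two-arm comparison of
    # means: detect (superiority) or exclude (non-inferiority) a difference
    # `delta`, with common SD `sigma` and one-sided type-I error `alpha`.
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2

sigma = 1.0
n_sup = n_per_arm(delta=0.50 * sigma, sigma=sigma)   # superiority: true effect = 0.5 SD
n_ni = n_per_arm(delta=0.25 * sigma, sigma=sigma)    # non-inferiority margin = half that effect
print(round(n_sup), round(n_ni), round(n_ni / n_sup, 1))   # the ratio is 4
```

Because the sample size scales with 1/delta squared, halving the relevant difference quadruples the required enrollment, which is the feasibility problem motivating the phase II designs proposed in the paper.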
EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.
Tal-Ezer, Hillel
2016-05-19
Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of the molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of Hilbert space is huge. Practically, only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or Eigs of Matlab) but significantly simpler.
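For readers who want a feel for the scale of problem described here, the snippet below uses SciPy's ARPACK interface (the implicitly restarted Arnoldi/Lanczos method cited above, not EvArnoldi itself) to extract a handful of the lowest eigenvalues of a large, sparse, discretized model Hamiltonian. The harmonic-oscillator potential, grid, and shift-invert settings are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh   # ARPACK: implicitly restarted Lanczos/Arnoldi

# 1-D model Hamiltonian H = -d2/dx2 + x^2 on a fine grid (illustrative only).
n = 100_000
x = np.linspace(-10.0, 10.0, n)
h = x[1] - x[0]
kinetic = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (-1.0 / h**2)
H = (kinetic + sp.diags(x**2)).tocsc()

# Only a small subset of eigenvalues is needed; shift-invert about zero
# targets the bottom of the spectrum without diagonalizing the full matrix.
vals = eigsh(H, k=6, sigma=0.0, which='LM', return_eigenvectors=False)
print(np.sort(vals))   # close to 1, 3, 5, 7, 9, 11 (harmonic-oscillator ladder)
```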
A Fast Evaluation Method for Energy Building Consumption Based on the Design of Experiments
NASA Astrophysics Data System (ADS)
Belahya, Hocine; Boubekri, Abdelghani; Kriker, Abdelouahed
2017-08-01
The building sector is one of the largest energy consumers in Algeria, accounting for 42% of consumption. The need for energy has continued to grow inordinately, owing to the lack of legislation on energy performance in this large consumer sector. Another reason is the continual change in users' requirements to maintain their comfort, especially in summer in the drylands of southern Algeria, where the town of Ouargla presents a typical example; there, the use of air conditioning leads to a large amount of electricity consumption. In order to achieve a high-performance building envelope, an optimization of the major envelope parameters is required; the design of experiments (DOE) method can determine the most influential parameters and eliminate the less important ones. Studying a building is often complex and time consuming due to the large number of parameters to consider. This study focuses on reducing the computing time and determining the major parameters of building energy consumption, such as building area, shape factor, orientation, wall-to-window ratio, etc., in order to propose models that minimize the seasonal energy consumption due to air-conditioning needs.
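A minimal sketch of the screening idea follows, assuming a toy cooling-energy model and made-up factor levels (not the building model or factors of the study): a two-level full factorial design is evaluated and classical main effects are used to rank the parameters, which is the kind of reduction DOE provides before any detailed simulation.

```python
import itertools
import numpy as np

# Illustrative factors and low/high levels (coded -1 / +1); all values assumed.
factors = {
    "floor_area":        (100.0, 300.0),   # m2
    "shape_factor":      (0.6, 1.0),       # envelope area / volume
    "window_wall_ratio": (0.1, 0.4),
    "orientation_south": (0.0, 1.0),       # 0 = north-facing, 1 = south-facing glazing
}

def toy_cooling_energy(area, shape, wwr, south):
    # Purely illustrative response (kWh per cooling season), not a validated model.
    return 40.0 * area * shape * (1.0 + 2.0 * wwr) * (1.0 + 0.3 * south) / 100.0

names = list(factors)
runs, responses = [], []
for coded in itertools.product([-1, 1], repeat=len(names)):
    levels = [factors[n][0] if c < 0 else factors[n][1] for n, c in zip(names, coded)]
    runs.append(coded)
    responses.append(toy_cooling_energy(*levels))

X, y = np.array(runs, dtype=float), np.array(responses)
main_effects = X.T @ y / (len(y) / 2)          # classical 2-level main-effect estimate
for n, e in sorted(zip(names, main_effects), key=lambda p: -abs(p[1])):
    print(f"{n:>18}: {e:8.1f} kWh")
```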
NASA Technical Reports Server (NTRS)
Rybczynski, Fred
1993-01-01
A major challenge facing data processing centers today is data management. This includes the storage of large volumes of data and access to it. Current media storage for large data volumes is typically off line and frequently off site in warehouses. Access to data archived in this fashion can be subject to long delays, errors in media selection and retrieval, and even loss of data through misplacement or damage to the media. Similarly, designers responsible for architecting systems capable of continuous high-speed recording of large volumes of digital data are faced with the challenge of identifying technologies and configurations that meet their requirements. Past approaches have tended to evaluate the combination of the fastest tape recorders with the highest capacity tape media and then to compromise technology selection as a consequence of cost. This paper discusses an architecture that addresses both of these challenges and proposes a cost effective solution based on robots, high speed helical scan tape drives, and large-capacity media.
NASA Technical Reports Server (NTRS)
Gabriel, Philip M.; Yeh, Penshu; Tsay, Si-Chee
2013-01-01
This paper presents results and analyses of applying an international space data compression standard to weather radar measurements that can easily span 8 orders of magnitude and typically require a large storage capacity as well as significant bandwidth for transmission. By varying the degree of the data compression, we analyzed the non-linear response of models that relate measured radar reflectivity and/or Doppler spectra to the moments and properties of the particle size distribution characterizing clouds and precipitation. Preliminary results for the meteorologically important phenomena of clouds and light rain indicate that for a 0.5 dB calibration uncertainty, typical for the ground-based pulsed-Doppler 94 GHz (or 3.2 mm, W-band) weather radar used as a proxy for spaceborne radar in this study, a lossless compression ratio of only 1.2 is achievable. However, further analyses of the non-linear response of various models of rainfall rate, liquid water content and median volume diameter show that a lossy data compression ratio exceeding 15 is realizable. The exploratory analyses presented are relevant to future satellite missions, where transmission bandwidth is at a premium and the storage requirements for vast volumes of data are potentially problematic.
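To illustrate how a non-linear retrieval model relaxes the compression requirement, the sketch below quantizes reflectivity in dB (a crude stand-in for lossy compression) and propagates it through a textbook Marshall-Palmer Z-R relation. The coefficients and the synthetic reflectivity range are assumptions for illustration, not the retrieval models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dbz = rng.uniform(-20.0, 20.0, 10_000)        # synthetic light-rain/cloud reflectivities (dBZ)

def rain_rate(dbz, a=200.0, b=1.6):
    # Marshall-Palmer style Z = a * R**b, inverted for rain rate (mm/h).
    z = 10.0 ** (dbz / 10.0)                  # mm^6 m^-3
    return (z / a) ** (1.0 / b)

for step in (0.1, 0.5, 2.0):                  # quantization step in dB
    lossy = np.round(dbz / step) * step
    err = np.abs(rain_rate(lossy) - rain_rate(dbz)) / rain_rate(dbz)
    print(f"{step:4.1f} dB step -> median relative rain-rate error {100 * np.median(err):.2f}%")
```

Even a coarse dB quantization perturbs the retrieved rain rate by only a few percent, which is the intuition behind accepting much higher lossy compression ratios than the lossless figure quoted above.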
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower-resolution subsampled image is first colorized and this low-resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
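A matrix-free flavor of the scribble-propagation step can be sketched as follows: chrominance is repeatedly averaged over neighbours with luminance-similarity weights while scribbled pixels stay clamped, i.e., a Jacobi-style iteration that never forms the sparse matrix. This is only a simplified stand-in; it omits the integral-image and coarse-to-fine accelerations described above, and the weighting scheme and boundary handling (periodic, via np.roll) are assumptions.

```python
import numpy as np

def colorize(gray, chroma, mask, iters=500, sigma=0.05):
    # gray: HxW luminance in [0, 1]; chroma: HxWx2 with scribbled values;
    # mask: HxW bool marking scribbled pixels (kept fixed during iteration).
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # Precompute luminance-affinity weights to each 4-neighbour.
    weights = [np.exp(-((gray - np.roll(gray, s, axis=(0, 1))) ** 2) / (2 * sigma ** 2))
               for s in shifts]
    u = chroma.copy()
    for _ in range(iters):
        acc = np.zeros_like(u)
        wsum = np.zeros_like(gray)
        for s, w in zip(shifts, weights):
            acc += w[..., None] * np.roll(u, s, axis=(0, 1))
            wsum += w
        u = np.where(mask[..., None], chroma, acc / wsum[..., None])   # clamp scribbles
    return u

# toy usage: a luminance ramp with two color scribbles
gray = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
chroma = np.zeros((64, 64, 2))
mask = np.zeros((64, 64), dtype=bool)
chroma[5, 5] = (0.4, -0.2); mask[5, 5] = True
chroma[60, 60] = (-0.3, 0.3); mask[60, 60] = True
colored_uv = colorize(gray, chroma, mask)
```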
Watermarking techniques for electronic delivery of remote sensing images
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; Magli, Enrico; Olmo, Gabriella
2002-09-01
Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification.
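As a generic illustration of the near-lossless idea (not the two algorithms evaluated in the paper), the sketch below adds a keyed pseudorandom plus/minus delta pattern so that no pixel changes by more than delta grey levels, and detects the mark by correlation with the keyed pattern. The detection threshold and delta are arbitrary assumptions.

```python
import numpy as np

def _pattern(shape, key, delta):
    # Keyed pseudorandom +/- delta pattern shared by embedder and detector.
    return np.random.default_rng(key).choice([-delta, delta], size=shape)

def embed(img, key, delta=1):
    # Additive embedding with a hard per-pixel distortion bound of `delta`.
    marked = img.astype(np.int16) + _pattern(img.shape, key, delta)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(img, key, delta=1):
    # Correlate the zero-mean image with the keyed pattern; for a marked image
    # the statistic concentrates near delta**2, for an unmarked one near 0.
    p = _pattern(img.shape, key, delta)
    stat = np.mean((img.astype(np.float64) - img.mean()) * p)
    return stat, stat > 0.5 * delta ** 2

rng = np.random.default_rng(7)
original = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
marked = embed(original, key=1234)
print("max distortion:", int(np.abs(marked.astype(int) - original.astype(int)).max()))
print("marked:", detect(marked, key=1234), " unmarked:", detect(original, key=1234))
```

The hard one-grey-level bound is what "near-lossless" buys for remote sensing: the scientific content (e.g., classification inputs) is perturbed by a known, tightly limited amount.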
Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines
NASA Astrophysics Data System (ADS)
Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath
2017-04-01
The cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users/applications. These transfer lines are composed of various critical sections, i.e., tee sections, elbows, flexible components, etc. The mechanical sustainability (under failure circumstances) of these transfer lines is a primary requirement for safe operation of the system and applications. The transfer lines need to be designed for multiple design constraints such as line layout, support locations and space restrictions. The transfer lines are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, loads due to a leak in the insulation vacuum, etc. [1]. Analytical calculations and flexibility analysis using professional software were performed for typical transfer lines without any flexible components, and the results were analysed for functional and mechanical load conditions. The failure modes were identified along the critical sections. The same transfer line was then refurbished with the flexible components and analysed for failure modes. The flexible components provide additional flexibility to the transfer line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility analysis software. The optimization of the flexible components' size and selection was performed, and components were selected to meet the design requirements as per code.
Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-03-01
To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. For the purposes of this Guide, large-scale Federal renewable energy projects are defined as renewable energy facilities larger than 10 megawatts (MW) that are sited on Federal property and lands and typically financed and owned by third parties.1 The U.S. Department of Energy’s Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This Guide is intended to provide a general resource that will begin to develop the Federal employee’s awareness and understanding of the project developer’s operating environment and the private sector’s awareness and understanding of the Federal environment. Because the vast majority of the investment that is required to meet the goals for large-scale renewable energy projects will come from the private sector, this Guide has been organized to match Federal processes with typical phases of commercial project development. FEMP collaborated with the National Renewable Energy Laboratory (NREL) and professional project developers on this Guide to ensure that Federal projects have key elements recognizable to private sector developers and investors. The main purpose of this Guide is to provide a project development framework to allow the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project. This framework begins the translation between the Federal and private sector operating environments. When viewing the overall
Sekine, Masashi; Kita, Kahori; Yu, Wenwei
2015-01-01
Unlike forearm amputees, transhumeral amputees have residual stumps that are too small to provide a sufficient range of operation for their prosthetic parts to perform usual activities of daily living. Furthermore, it is difficult for small residual stumps to provide sufficient impact absorption for safe manipulation in daily living, as intact arms do. Therefore, substitution of upper limb function in transhumeral amputees requires a sufficient range of motion and sufficient viscoelasticity for shoulder prostheses under critical weight and dimension constraints. We propose the use of two different types of actuators, ie, pneumatic elastic actuators (PEAs) and servo motors. PEAs offer high power-to-weight performance and have intrinsic viscoelasticity in comparison with motors or standard industrial pneumatic cylinder actuators. However, the usefulness of PEAs in large working spaces is limited because of their short strokes. Servo motors, in contrast, can be used to achieve large ranges of motion. In this study, the relationship between the force and stroke of PEAs was investigated. The impact absorption of both types of actuators was measured using a single degree-of-freedom prototype to evaluate actuator compliance for safety purposes. Based on the fundamental properties of the actuators identified, a four degree-of-freedom robotic arm is proposed for prosthetic use. The configuration of the actuators and functional parts was designed to achieve a specified range of motion and torque calculated from the results of a simulation of typical movements performed in usual activities of daily living. Our experimental results showed that the requirements for the shoulder prostheses could be satisfied.
A study of facilities and fixtures for testing of a high speed civil transport wing component
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Vause, R. F.; Bowman, L. M.; Jensen, J. K.; Martin, C. J., Jr.; Stockwell, A. E.; Waters, W. A., Jr.
1996-01-01
A study was performed to determine the feasibility of testing a large-scale High Speed Civil Transport wing component in the Structures and Materials Testing Laboratory in Building 1148 at NASA Langley Research Center. The report includes a survey of the electrical and hydraulic resources and identifies the backing structure and floor hard points which would be available for reacting the test loads. The backing structure analysis uses a new finite element model of the floor and backstop support system in the Structures Laboratory. Information on the data acquisition system and the thermal power requirements is also presented. The study identified the hardware that would be required to test a typical component, including the number and arrangement of hydraulic actuators required to simulate expected flight loads. Load introduction and reaction structure concepts were analyzed to investigate the effects of experimentally induced boundary conditions.
The BioLexicon: a large-scale terminological resource for biomedical text mining
2011-01-01
Background: Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. Results: This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is modelled using the Lexical Markup Framework, an ISO standard. Conclusions: The BioLexicon contains over 2.2 M lexical entries and over 1.8 M terminological variants, as well as over 3.3 M semantic relations, including over 2 M synonymy relations. Its exploitation can benefit both application developers and users. We demonstrate some such benefits by describing integration of the resource into a number of different tools, and evaluating improvements in performance that this can bring. PMID:21992002
The BioLexicon: a large-scale terminological resource for biomedical text mining.
Thompson, Paul; McNaught, John; Montemagni, Simonetta; Calzolari, Nicoletta; del Gratta, Riccardo; Lee, Vivian; Marchi, Simone; Monachini, Monica; Pezik, Piotr; Quochi, Valeria; Rupp, C J; Sasaki, Yutaka; Venturi, Giulia; Rebholz-Schuhmann, Dietrich; Ananiadou, Sophia
2011-10-12
Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is modelled using the Lexical Markup Framework, an ISO standard. The BioLexicon contains over 2.2 M lexical entries and over 1.8 M terminological variants, as well as over 3.3 M semantic relations, including over 2 M synonymy relations. Its exploitation can benefit both application developers and users. We demonstrate some such benefits by describing integration of the resource into a number of different tools, and evaluating improvements in performance that this can bring.
Luman, Marjolein; Sergeant, Joseph A; Knol, Dirk L; Oosterlaan, Jaap
2010-08-15
When making decisions, children with oppositional defiant disorder (ODD) are thought to focus on reward and ignore penalty. This is suggested to be associated with a state of low psychophysiological arousal. This study investigates decision making in 18 children with oppositional defiant disorder and 24 typically developing control subjects. Children were required to choose between three alternatives that carried either frequent small rewards and occasional small penalties (advantageous), frequent large rewards and increasing penalties (seductive), or frequent small rewards and increasing penalties (disadvantageous). Penalties in the seductive and disadvantageous alternatives increased either in frequency or magnitude in two conditions. Heart rate (HR) and skin conductance responses to reinforcement were obtained. In the magnitude condition, children with ODD showed an increased preference for the seductive alternative (carrying large rewards); this was not observed in the frequency condition. Children with ODD, compared with typically developing children, displayed greater HR reactivity to reward (more HR deceleration) and smaller HR reactivity to penalty. Correlation analyses showed that decreased HR responses to penalty were related to an increased preference for large rewards. No group differences were observed in skin conductance responses to reward or penalty. The findings suggest that an increased preference for large rewards in children with ODD is related to a reduced cardiac reactivity to aversive stimuli. This confirms notions of impaired decision making and altered reinforcement sensitivity in children with ODD and adds to the literature linking altered autonomic control to antisocial behavior. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
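A minimal numerical sketch of this preprocessing step follows, using synthetic Gaussian averaging kernels purely for illustration (not solar mode kernels): the kernel matrix is compressed by a truncated SVD, and both kernels and data are projected onto the leading singular vectors before any OLA inversion would be run, so the expensive matrix inversions act on a far smaller system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_radii = 2000, 300
r = np.linspace(0.0, 1.0, n_radii)
centers = rng.uniform(0.1, 0.9, n_modes)
K = np.exp(-((r[None, :] - centers[:, None]) / 0.08) ** 2)   # synthetic mode kernels

# Truncated SVD: keep enough singular vectors to capture nearly all the
# information content of the (highly redundant) mode set.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
print(f"{n_modes} modes -> {k} effective components")

# Project synthetic data and kernels onto the leading singular vectors;
# these reduced quantities are what a subsequent OLA inversion would use.
data = K @ np.sin(3 * np.pi * r) + 0.01 * rng.standard_normal(n_modes)
reduced_kernels = np.diag(s[:k]) @ Vt[:k]      # k x n_radii
reduced_data = U[:, :k].T @ data               # k values instead of n_modes
```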
Internet based ECG medical information system.
James, D A; Rowlands, D; Mahnovetski, R; Channells, J; Cutmore, T
2003-03-01
Physiological monitoring of humans for medical applications is well established and ready to be adapted to the Internet. This paper describes the implementation of a Medical Information System (MIS-ECG system) incorporating an Internet based ECG acquisition device. Traditionally clinical monitoring of ECG is largely a labour intensive process with data being typically stored on paper. Until recently, ECG monitoring applications have also been constrained somewhat by the size of the equipment required. Today's technology enables large and fixed hospital monitoring systems to be replaced by small portable devices. With an increasing emphasis on health management a truly integrated information system for the acquisition, analysis, patient particulars and archiving is now a realistic possibility. This paper describes recent Internet and technological advances and presents the design and testing of the MIS-ECG system that utilises those advances.
Position measurement of the direct drive motor of Large Aperture Telescope
NASA Astrophysics Data System (ADS)
Li, Ying; Wang, Daxing
2010-07-01
Along with the development of space science and astronomy, the production of large and very large aperture telescopes will become the trend. Direct drive technology, with a unified electromagnetic and structural design, is one method of achieving precise drive of a large aperture telescope. A precise direct-drive rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical mechanical and electrical integration design. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement control system requires high resolution; it must precisely align and measure the position of the rotor shaft while converting the position information into pole-position information corresponding to the required motor pole number. The system uses a high-precision metal-band encoder and an absolute encoder; the encoder information is processed in software by a 32-bit RISC CPU, yielding a high-resolution composite encoder. The paper gives relevant laboratory test results at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is supported by the Chinese National Natural Science Funds (10833004).
Responses of large mammals to climate change.
Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan
2014-01-01
Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.
Responses of large mammals to climate change
Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan
2014-01-01
Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change. PMID:27583293
Rights of Conscience Protections for Armed Forces Service Members and Their Chaplains
2015-07-22
established five categories of religious accommodation requests: dietary, grooming, medical, uniform, and worship practices.2 • Dietary: typically, these... • Medical: typically, these are requests for a waiver of mandatory immunizations. • Uniform: typically, these are requests to wear religious jewelry or... service members in their units. Requirements: A chaplain applicant is required to meet DoD medical and physical standards for commissioning as an
Preliminary Assessment of Microwave Readout Multiplexing Factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croce, Mark Philip; Koehler, Katrina Elizabeth; Rabin, Michael W.
2017-01-23
Ultra-high resolution microcalorimeter gamma spectroscopy is a new non-destructive assay technology for measurement of plutonium isotopic composition, with the potential to reduce total measurement uncertainty to a level competitive with destructive analysis methods [1-4]. Achieving this level of performance in practical applications requires not only the energy resolution now routinely achieved with transition-edge sensor microcalorimeter arrays (an order of magnitude better than for germanium detectors) but also high throughput. Microcalorimeter gamma spectrometers have not yet achieved detection efficiency and count rate capability that is comparable to germanium detectors, largely because of limits from existing readout technology. Microcalorimeter detectors must be operated at low temperature to achieve their exceptional energy resolution. Although the typical 100 mK operating temperatures can be achieved with reliable, cryogen-free systems, the cryogenic complexity and heat load from individual readout channels for large sensor arrays is prohibitive. Multiplexing is required for practical systems. The most mature multiplexing technology at present is time-division multiplexing (TDM) [3, 5-6]. In TDM, the sensor outputs are switched by applying bias current to one SQUID amplifier at a time. Transition-edge sensor (TES) microcalorimeter arrays as large as 256 pixels have been developed for X-ray and gamma-ray spectroscopy using TDM technology. Due to bandwidth limits and noise scaling, TDM is limited to a maximum multiplexing factor of approximately 32-40 sensors on one readout line [8]. Increasing the size of microcalorimeter arrays above the kilopixel scale, required to match the throughput of germanium detectors, requires the development of a new readout technology with a much higher multiplexing factor.
In Pursuit of Neurophenotypes: The Consequences of Having Autism and a Big Brain
Amaral, David G.; Li, Deana; Libero, Lauren; Solomon, Marjorie; Van de Water, Judy; Mastergeorge, Ann; Naigles, Letitia; Rogers, Sally; Nordahl, Christine Wu
2017-01-01
A consensus has emerged that despite common core features, autism spectrum disorder (ASD) has multiple etiologies and various genetic and biological characteristics. The fact that there are likely to be subtypes of ASD has complicated attempts to develop effective therapies. The UC Davis MIND Institute Autism Phenome Project is a longitudinal, multidisciplinary analysis of children with autism and age-matched typically developing controls; nearly 400 families are participating in this study. The overarching goal is to gather sufficient biological, medical, and behavioral data to allow definition of clinically meaningful subtypes of ASD. One reasonable hypothesis is that different subtypes of autism will demonstrate different patterns of altered brain organization or development i.e., different neurophenotypes. In this Commentary, we discuss one neurophenotype that is defined by megalencephaly, or having brain size that is large and disproportionate to body size. We have found that 15% of the boys with autism demonstrate this neurophenotype, though it is far less common in girls. We review behavioral and medical characteristics of the large-brained group of boys with autism in comparison to those with typically sized brains. While brain size in typically developing individuals is positively correlated with cognitive function, the children with autism and larger brains have more severe disabilities and poorer prognosis. This research indicates that phenotyping in autism, like genotyping, requires a very substantial cohort of subjects. Moreover, since brain and behavior relationships may emerge at different times during development, this effort highlights the need for longitudinal analyses to carry out meaningful phenotyping. PMID:28239961
Energy Efficient IoT Data Collection in Smart Cities Exploiting D2D Communications.
Orsino, Antonino; Araniti, Giuseppe; Militano, Leonardo; Alonso-Zarate, Jesus; Molinaro, Antonella; Iera, Antonio
2016-06-08
Fifth Generation (5G) wireless systems are expected to connect an avalanche of "smart" objects disseminated from the largest "Smart City" to the smallest "Smart Home". In this vision, Long Term Evolution-Advanced (LTE-A) is deemed to play a fundamental role in the Internet of Things (IoT) arena providing a large coherent infrastructure and a wide wireless connectivity to the devices. However, since LTE-A was originally designed to support high data rates and large data size, novel solutions are required to enable an efficient use of radio resources to convey small data packets typically exchanged by IoT applications in "smart" environments. On the other hand, the typically high energy consumption required by cellular communications is a serious obstacle to large scale IoT deployments under cellular connectivity as in the case of Smart City scenarios. Network-assisted Device-to-Device (D2D) communications are considered as a viable solution to reduce the energy consumption for the devices. The particular approach presented in this paper consists in appointing one of the IoT smart devices as a collector of all data from a cluster of objects using D2D links, thus acting as an aggregator toward the eNodeB. By smartly adapting the Modulation and Coding Scheme (MCS) on the communication links, we will show it is possible to maximize the radio resource utilization as a function of the total amount of data to be sent. A further benefit that we will highlight is the possibility to reduce the transmission power when a more robust MCS is adopted. A comprehensive performance evaluation in a wide set of scenarios will testify the achievable gains in terms of energy efficiency and resource utilization in the envisaged D2D-based IoT data collection.
Energy Efficient IoT Data Collection in Smart Cities Exploiting D2D Communications
Orsino, Antonino; Araniti, Giuseppe; Militano, Leonardo; Alonso-Zarate, Jesus; Molinaro, Antonella; Iera, Antonio
2016-01-01
Fifth Generation (5G) wireless systems are expected to connect an avalanche of “smart” objects disseminated from the largest “Smart City” to the smallest “Smart Home”. In this vision, Long Term Evolution-Advanced (LTE-A) is deemed to play a fundamental role in the Internet of Things (IoT) arena providing a large coherent infrastructure and a wide wireless connectivity to the devices. However, since LTE-A was originally designed to support high data rates and large data size, novel solutions are required to enable an efficient use of radio resources to convey small data packets typically exchanged by IoT applications in “smart” environments. On the other hand, the typically high energy consumption required by cellular communications is a serious obstacle to large scale IoT deployments under cellular connectivity as in the case of Smart City scenarios. Network-assisted Device-to-Device (D2D) communications are considered as a viable solution to reduce the energy consumption for the devices. The particular approach presented in this paper consists in appointing one of the IoT smart devices as a collector of all data from a cluster of objects using D2D links, thus acting as an aggregator toward the eNodeB. By smartly adapting the Modulation and Coding Scheme (MCS) on the communication links, we will show it is possible to maximize the radio resource utilization as a function of the total amount of data to be sent. A further benefit that we will highlight is the possibility to reduce the transmission power when a more robust MCS is adopted. A comprehensive performance evaluation in a wide set of scenarios will testify the achievable gains in terms of energy efficiency and resource utilization in the envisaged D2D-based IoT data collection. PMID:27338385
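The MCS-adaptation idea lends itself to a simple illustration. The Python sketch below uses an invented MCS table (the modulation names, bits-per-resource-block figures and SNR thresholds are placeholder values, not those of the paper or of the LTE-A specification) to pick the most robust scheme that still fits the aggregated payload into the resource blocks granted to the aggregator, which is the condition under which transmit power can be lowered:

    # Hypothetical MCS table: (name, bits carried per physical resource block,
    # minimum SNR in dB needed for reliable decoding). Values are illustrative only.
    MCS_TABLE = [
        ("QPSK-1/3",  120,  1.0),
        ("QPSK-2/3",  240,  4.0),
        ("16QAM-1/2", 360,  8.0),
        ("16QAM-3/4", 540, 11.0),
        ("64QAM-3/4", 810, 15.0),
    ]

    def select_mcs(payload_bits, granted_prbs, link_snr_db):
        """Return the most robust (lowest-order) MCS that both fits the aggregated
        payload into the granted resource blocks and is decodable at the link SNR."""
        for name, bits_per_prb, min_snr in MCS_TABLE:
            fits = payload_bits <= bits_per_prb * granted_prbs
            decodable = link_snr_db >= min_snr
            if fits and decodable:
                return name
        return None  # no feasible MCS: request more PRBs or split the report

    # The aggregator collects small packets from a cluster of IoT devices over D2D
    # links, then picks one MCS for the single uplink transmission to the eNodeB.
    cluster_packets_bits = [96, 128, 96, 256, 64]
    print(select_mcs(sum(cluster_packets_bits), granted_prbs=4, link_snr_db=9.0))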
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
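A minimal sketch of the per-time-step idea, linearizing the joint water/energy allocation and solving it as a small LP, is given below in Python with scipy.optimize.linprog. The reservoir volume, demands and the energy-per-unit-release coefficient are invented for illustration and are not taken from the paper; a production model would instead use a dedicated network-programming solver as the authors describe.

    import numpy as np
    from scipy.optimize import linprog

    # One time step of a toy single-reservoir system. Decision variables:
    # x0 = release routed to the water-demand node, x1 = release through the turbine.
    available = 120.0      # water available this step (storage + inflow) [hm^3]
    water_demand = 60.0    # downstream demand [hm^3]
    energy_demand = 90.0   # energy demand [GWh]
    e_coeff = 1.2          # energy produced per unit of turbined water [GWh/hm^3]

    # Maximize met water demand plus met energy (linprog minimizes, so negate).
    c = [-1.0, -e_coeff]
    A_ub = [[1.0, 1.0],        # total release cannot exceed available water
            [1.0, 0.0],        # do not deliver more water than demanded
            [0.0, e_coeff]]    # do not generate more energy than demanded
    b_ub = [available, water_demand, energy_demand]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    x_water, x_turbine = res.x
    print(f"to demand: {x_water:.1f} hm^3, turbined: {x_turbine:.1f} hm^3, "
          f"energy: {e_coeff * x_turbine:.1f} GWh")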
Conceptual Design of the TPF-O SC Buses
NASA Technical Reports Server (NTRS)
Purves, Lloyd R.
2007-01-01
The Terrestrial Planet Finder - Occulter (TPF-O) mission has two Spacecraft (SC) buses, one for a space telescope and the other for a formation-flying occulter. SC buses typically supply the utilities (support structures, propulsion, attitude control, power, communications, etc.) required by the payloads. Unique requirements for the occulter SC bus are to provide the large delta V required for the slewing maneuvers of the occulter, and communications for formation flying. The TPF-O telescope SC bus shares some key features of the one for the Hubble Space Telescope (HST): both support space telescopes designed to observe in the visible to near infrared range of wavelengths with comparable primary mirror apertures (2.4 m for HST, 2.4 - 4.0 m for TPF-O). However, TPF-O is expected to have a Wide Field Camera (WFC) with a Field of View (FOV) much larger than that of HST. This WFC is also expected to provide fine guidance. TPF-O is designed to operate in an orbit around the Sun-Earth Lagrange 2 (SEL2) point. The longer communications range to SEL2 and the large science FOV require higher performance communications than HST. Maintaining a SEL2 orbit requires TPF-O, unlike HST, to have a propulsion system. The velocity required for reaching SEL2 and the limited capabilities of affordable launch vehicles require both TPF-O elements to have compact, low-mass designs. Finally, it is possible that TPF-O may utilize a modular design derived from that of HST to allow servicing in the SEL2 orbit.
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
Automation of large scale transient protein expression in mammalian cells
Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.
2011-01-01
Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making man-power a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074
Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.
Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun
2014-01-01
While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large, typically containing many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize computational complexity of eThread and EC2 infrastructure. Based on results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
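The pilot idea, acquiring compute resources once and then streaming many unevenly sized threading tasks through them, can be sketched in plain Python. The example below is only a schematic stand-in (ProcessPoolExecutor instead of SAGA-Pilot, and a dummy run_ethread function instead of the real tool) that illustrates task-level parallelism with chunking to absorb the workload variation the authors describe:

    from concurrent.futures import ProcessPoolExecutor, as_completed

    def run_ethread(sequences):
        """Placeholder for invoking eThread on a bundle of sequences; here it just
        returns fake per-sequence scores so the sketch is runnable end to end."""
        return {seq_id: len(seq) % 7 for seq_id, seq in sequences}

    def chunk(items, size):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    if __name__ == "__main__":
        genome = [(f"prot{i:04d}", "M" * (50 + (i * 37) % 400)) for i in range(2000)]
        results = {}
        # The "pilot" holds a fixed pool of workers; bundles of sequences are the tasks.
        with ProcessPoolExecutor(max_workers=8) as pool:
            futures = [pool.submit(run_ethread, bundle) for bundle in chunk(genome, 50)]
            for fut in as_completed(futures):
                results.update(fut.result())
        print(len(results), "sequences annotated")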
Rio: a dynamic self-healing services architecture using Jini networking technology
NASA Astrophysics Data System (ADS)
Clarke, James B.
2002-06-01
Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. In order for this to happen, the application is usually taken down, installed and started with all systems in-sync and knowing about each other. Such static environments are extremely difficult to set up, deploy and administer.
NASA Astrophysics Data System (ADS)
Winney, Peter E.
1989-07-01
A standard 660MW turbo-alternator, operated by the CEGB, runs at an energy conversion efficiency of about 38%. In addition to the 660MW electrical power, 600MW of waste thermal power is generated which has to be dissipated via water cooled heat exchangers. A typical 2000MW station has a requirement of about 1.3 billion gallons of cooling water per day. This is more than the daily throughput of most of our rivers and so inland stations are equipped with cooling towers to dump heat from the coolant.
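The cooling-water figure can be checked with a rough energy balance. The short Python sketch below assumes roughly three 660 MW units per 2000 MW station and a condenser temperature rise of about 7 K; neither assumption is stated in the abstract, so the result is only an order-of-magnitude check against the quoted 1.3 billion gallons per day.

    # Rough check of the cooling-water requirement from Q = m_dot * c_p * dT.
    Q = 3 * 600e6           # waste heat rejected to cooling water [W] (3 units)
    c_p = 4186.0            # specific heat of water [J/(kg K)]
    dT = 7.0                # assumed condenser temperature rise [K]
    m_dot = Q / (c_p * dT)  # required cooling-water mass flow [kg/s]
    m3_per_day = m_dot * 86400 / 1000.0
    imp_gallons_per_day = m3_per_day / 4.546e-3
    print(f"{m_dot:.0f} kg/s -> {imp_gallons_per_day / 1e9:.1f} billion imperial gallons/day")

With these assumptions the flow comes out at roughly 1.2 billion imperial gallons per day, the same order as the figure quoted in the abstract.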
Super-resolution optics for virtual reality
NASA Astrophysics Data System (ADS)
Grabovičkić, Dejan; Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj; Nikolic, Milena I.; Lopez, Jesus; Gorospe, Jorge; Sanchez, Eduardo; Lastres, Carmen; Mohedano, Ruben
2017-06-01
In present commercial Virtual Reality (VR) headsets the resolution perceived is still limited, since the VR pixel density (typically 10-15 pixels/deg) is well below what the human eye can resolve (60 pixels/deg). We present here novel advanced optical design approaches that dramatically increase the perceived resolution of the VR while keeping the large FoV required in VR applications. This approach can be applied to a vast number of optical architectures, including some advanced configurations, such as multichannel designs. All this is done at the optical design stage, and no eye tracker is needed in the headset.
Selection of plants for phytoremediation of soils contaminated with radionuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Entry J.A.; Vance, N.C.; Watrud, L.S.
1996-12-31
Remediation of soil contaminated with radionuclides typically requires that soil be removed from the site and treated with various dispersing and chelating chemicals. Numerous studies have shown that radionuclides are generally not leached from the top 0.4 meters of soil, where plant roots actively accumulate elements. Restoration of large areas of land contaminated with low levels of radionuclides may be feasible using phytoremediation. Criteria for the selection of plants for phytoremediation, molecular approaches to increase radionuclide uptake, effects of cultural practices on uptake and assessment of environmental effects of phytoremediation will be discussed.
A self-testing dynamic RAM chip
NASA Astrophysics Data System (ADS)
You, Y.; Hayes, J. P.
1985-02-01
A novel approach to making very large dynamic RAM chips self-testing is presented. It is based on two main concepts: on-chip generation of regular test sequences with very high fault coverage, and concurrent testing of storage-cell arrays to reduce overall testing time. The failure modes of a typical 64 K RAM employing one-transistor cells are analyzed to identify their test requirements. A comprehensive test generation algorithm that can be implemented with minimal modification to a standard cell layout is derived. The self-checking peripheral circuits necessary to implement this testing algorithm are described, and the self-testing RAM is briefly evaluated.
NASA Astrophysics Data System (ADS)
Bourrion, O.; Boyer, B.; Derome, L.; Pignol, G.
2016-06-01
We developed a highly integrated and versatile electronic module to equip small nuclear physics experiments and lab teaching classes: the User friendly Configurable Trigger, scaler and delay Module for nuclear and particle physics (UCTM). It is configurable through a Graphical User Interface (GUI) and provides a large number of possible trigger conditions without requiring any Hardware Description Language (HDL) knowledge. This new version significantly enhances the previous capabilities by providing two additional features: signal digitization and time measurements. The design, performance and a typical application are presented.
Isbaner, Sebastian; Karedla, Narain; Kaminska, Izabela; Ruhlandt, Daja; Raab, Mario; Bohlen, Johann; Chizhik, Alexey; Gregor, Ingo; Tinnefeld, Philip; Enderlein, Jörg; Tsukanov, Roman
2018-04-11
Single-molecule localization based super-resolution microscopy has revolutionized optical microscopy and routinely allows for resolving structural details down to a few nanometers. However, there exists a rather large discrepancy between lateral and axial localization accuracy, the latter typically three to five times worse than the former. Here, we use single-molecule metal-induced energy transfer (smMIET) to localize single molecules along the optical axis, and to measure their axial distance with an accuracy of 5 nm. smMIET relies only on fluorescence lifetime measurements and does not require additional complex optical setups.
Conceptual design studies for large free-flying solar-reflector spacecraft
NASA Technical Reports Server (NTRS)
Hedgepeth, J. M.; Miller, R. K.; Knapp, K. P. W.
1981-01-01
The 1 km diameter reflecting film surface is supported by a lightweight structure which may be automatically deployed after launch in the Space Shuttle. A twin rotor, control moment gyroscope, with deployable rotors, is included as a primary control actuator. The vehicle has a total specific mass of less than 12 g/sq m including allowances for all required subsystems. The structural elements were sized to accommodate the loads of a typical SOLARES type mission where a swarm of these free flying satellites is employed to concentrate sunlight on a number of energy conversion stations on the ground.
Youngclaus, James A; Koehler, Paul A; Kotlikoff, Laurence J; Wiecha, John M
2013-01-01
Some discussions of physician specialty choice imply that indebted medical students avoid choosing primary care because education debt repayment seems economically unfeasible. The authors analyzed whether a physician earning a typical primary care salary can repay the current median level of education debt and meet standard household expenses without incurring additional debt. In 2010-2011, the authors used comprehensive financial planning software to model the annual finances for a fictional physician's household to compare the impact of various debt levels, repayment plans, and living expenses across three specialties. To accurately develop this spending model, they used published data from federal and local agencies, real estate sources, and national organizations. Despite growing debt levels, the authors found that physicians in all specialties can repay the current level of education debt without incurring more debt. However, some scenarios, typically those with higher borrowing levels, required trade-offs and compromises. For example, extended repayment plans require large increases in the total amount of interest repaid and the number of repayment years required, and the use of a federal loan forgiveness/repayment program requires a service obligation such as working at a nonprofit or practicing in a medically underserved area. A primary care career remains financially viable for medical school graduates with median levels of education debt. Graduates pursuing primary care with higher debt levels need to consider additional strategies to support repayment such as extended repayment terms, use of a federal loan forgiveness/repayment program, or not living in the highest-cost areas.
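The trade-off behind extended repayment is straightforward to quantify with the standard amortization formula. The Python sketch below uses hypothetical inputs (a $170,000 balance at 6.8% fixed interest, illustrative numbers rather than the study's published figures) to compare a 10-year and a 25-year schedule and show how the longer term lowers the monthly payment while roughly tripling the total interest paid:

    def amortize(principal, annual_rate, years):
        """Fixed monthly payment and total interest for a fully amortizing loan."""
        r = annual_rate / 12.0
        n = years * 12
        payment = principal * r / (1.0 - (1.0 + r) ** -n)
        return payment, payment * n - principal

    debt = 170_000.0        # hypothetical education debt
    rate = 0.068            # hypothetical fixed interest rate
    for years in (10, 25):
        pmt, interest = amortize(debt, rate, years)
        print(f"{years:>2} yr: ${pmt:,.0f}/month, total interest ${interest:,.0f}")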
The global Cretaceous-Tertiary fire: Biomass or fossil carbon
NASA Technical Reports Server (NTRS)
Gilmour, Iain; Guenther, Frank
1988-01-01
The global soot layer at the K-T boundary indicates a major fire triggered by meteorite impact. However, it is not clear whether the principal fuel was biomass or fossil carbon. Forests are favored by the C-13 delta value, which is close to the average for trees, but the total amount of elemental C is approximately 10 percent of the present living carbon, and thus requires very efficient conversion to soot. The PAH was analyzed at Woodside Creek, in the hope of finding a diagnostic molecular marker. A promising candidate is 1-methyl-7-isopropyl phenanthrene (retene), which is probably derived by low temperature degradation of abietic acid. Unlike other PAH that form by pyrosynthesis at higher temperatures, retene has retained the characteristic side chains of its parent molecule. A total of 11 PAH compounds were identified in the boundary clay. Retene is present in substantial abundance. The identification was confirmed by analysis of a retene standard. Retene is characteristic of the combustion of resinous higher plants. Its formation depends on both temperature and oxygen access, and is apparently highest in oxygen-poor fires. Such fires would also produce soot more efficiently, which may explain the high soot abundance. The relatively high level of coronene is not typical of a wood combustion source; it can, however, be produced during high temperature pyrolysis of methane, and presumably other H, C-containing materials. This would require large, hot, low O2 zones, which may occur only in very large fires. The presence of retene indicates that biomass was a significant fuel source for the soot at the Cretaceous-Tertiary boundary. The total amount of elemental C produced requires a greater than 3 percent soot yield, which is higher than typically observed for wildfires. However, retene and presumably coronene imply limited access of O2 and hence high soot yield.
Barnes maze testing strategies with small and large rodent models.
Rosenfeld, Cheryl S; Ferguson, Sherry A
2014-02-26
Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, use of automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic manipulations, neurobehavioral manipulations, or drug/toxicant exposure.
Clinical test responses to different orthoptic exercise regimes in typical young adults
Horwood, Anna; Toor, Sonia
2014-01-01
Purpose The relative efficiency of different eye exercise regimes is unclear, and in particular the influences of practice, placebo and the amount of effort required are rarely considered. This study measured conventional clinical measures following different regimes in typical young adults. Methods A total of 156 asymptomatic young adults were directed to carry out eye exercises three times daily for 2 weeks. Exercises were directed at improving blur responses (accommodation), disparity responses (convergence), both in a naturalistic relationship, convergence in excess of accommodation, accommodation in excess of convergence, and a placebo regime. They were compared to two control groups, neither of which was given exercises, but the second of which was asked to make maximum effort during the second testing. Results Instruction set and participant effort were more effective than many exercises. Convergence exercises independent of accommodation were the most effective treatment, followed by accommodation exercises, and both regimes resulted in changes in both vergence and accommodation test responses. Exercises targeting convergence and accommodation working together were less effective than those where they were separated. Accommodation measures were prone to large instruction/effort effects and monocular accommodation facility was subject to large practice effects. Conclusions Separating convergence and accommodation exercises seemed more effective than exercising both systems concurrently and suggests that stimulation of accommodation and convergence may act in an additive fashion to aid responses. Instruction/effort effects are large and should be carefully controlled if claims for the efficacy of any exercise regime are to be made. PMID:24471739
Systems Engineering and Reusable Avionics
NASA Technical Reports Server (NTRS)
Conrad, James M.; Murphy, Gloria
2010-01-01
One concept for future space flights is to construct building blocks for a wide variety of avionics systems. Once a unit has served its original purpose, it can be removed from the original vehicle and reused in a similar or dissimilar function, depending on the function blocks the unit contains. For example: Once a lunar lander has reached the moon's surface, an engine controller for the Lunar Descent Module would be removed and used for a lunar rover motor control unit or for an Environmental Control Unit for a Lunar Habitat. This senior design project included the investigation of a wide range of functions of space vehicles and possible uses. Specifically, this includes: (1) Determining and specifying the basic functioning blocks of space vehicles. (2) Building and demonstrating a concept model. (3) Showing high reliability is maintained. The specific implementation of this senior design project included a large project team made up of Systems, Electrical, Computer, and Mechanical Engineers/Technologists. The efforts were made up of several sub-groups that each worked on a part of the entire project. The large size and complexity made this project one of the more difficult to manage and advise. Typical projects only have 3-4 students, but this project had 10 students from five different disciplines. This paper describes the difference of this large project compared to typical projects, and the challenges encountered. It also describes how the systems engineering approach was successfully implemented so that the students were able to meet nearly all of the project requirements.
The prevalence and geographic distribution of complex co-occurring disorders: a population study.
Somers, J M; Moniruzzaman, A; Rezansoff, S N; Brink, J; Russolillo, A
2016-06-01
A subset of people with co-occurring substance use and mental disorders require coordinated support from health, social welfare and justice agencies to achieve diversion from homelessness, criminal recidivism and further health and social harms. Integrated models of care are typically concentrated in large urban centres. The present study aimed to empirically measure the prevalence and distribution of complex co-occurring disorders (CCD) in a large geographic region that includes urban as well as rural and remote settings. Linked data were examined in a population of roughly 3.7 million adults. Inclusion criteria for the CCD subpopulation were: physician diagnosed substance use and mental disorders; psychiatric hospitalisation; shelter assistance; and criminal convictions. Prevalence per 100 000 was calculated in 91 small areas representing urban, rural and remote settings. 2202 individuals met our inclusion criteria for CCD. Participants had high rates of hospitalisation (8.2 admissions), criminal convictions (8.6 sentences) and social assistance payments (over $36 000 CDN) in the past 5 years. There was wide variability in the geographic distribution of people with CCD, with high prevalence rates in rural and remote settings. People with CCD are not restricted to areas with large populations or to urban settings. The highest per capita rates of CCD were observed in relatively remote locations, where mental health and substance use services are typically in limited supply. Empirically supported interventions must be adapted to meet the needs of people living outside of urban settings with high rates of CCD.
Avian movements and wetland connectivity in landscape conservation
Haig, Susan M.; Mehlman, D.W.; Oring, L.W.
1998-01-01
The current conservation crisis calls for research and management to be carried out on a long-term, multi-species basis at large spatial scales. Unfortunately, scientists, managers, and agencies often are stymied in their effort to conduct these large-scale studies because of a lack of appropriate technology, methodology, and funding. This issue is of particular concern in wetland conservation, for which the standard landscape approach may include consideration of a large tract of land but fail to incorporate the suite of wetland sites frequently used by highly mobile organisms such as waterbirds (e.g., shorebirds, wading birds, waterfowl). Typically, these species have population dynamics that require use of multiple wetlands, but this aspect of their life history has often been ignored in planning for their conservation. We outline theoretical, empirical, modeling, and planning problems associated with this issue and suggest solutions to some current obstacles. These solutions represent a tradeoff between typical in-depth single-species studies and more generic multi-species studies. They include studying within- and among-season movements of waterbirds on a spatial scale appropriate to both widely dispersing and more stationary species; multi-species censuses at multiple sites; further development and use of technology such as satellite transmitters and population-specific molecular markers; development of spatially explicit population models that consider within-season movements of waterbirds; and recognition from funding agencies that landscape-level issues cannot adequately be addressed without support for these types of studies.
NASA Astrophysics Data System (ADS)
McClenaghan, J.; Garofalo, A. M.; Meneghini, O.; Smith, S. P.
2016-10-01
Transport modeling of a proposed ITER steady-state scenario based on DIII-D high βP discharges finds that the core confinement may be improved with either sufficient rotation or a negative central shear q-profile. The high poloidal beta scenario is characterized by a large bootstrap current fraction (80%) which reduces the demands on the external current drive, and a large radius internal transport barrier which is associated with improved normalized confinement. Typical temperature and density profiles from the non-inductive high poloidal beta scenario on DIII-D are scaled according to 0D modeling predictions of the requirements for achieving Q=5 steady state performance in ITER with "day one" H&CD capabilities. Then, TGLF turbulence modeling is carried out under systematic variations of the toroidal rotation and the core q-profile. Either strong negative central magnetic shear or rotation is found to successfully provide the turbulence suppression required to maintain the temperature and density profiles. This work supported by the US Department of Energy under DE-FC02-04ER54698.
NASA Astrophysics Data System (ADS)
1994-07-01
Satellites have shrunk the world to the size of the proverbial global village. They track weather and the traffic patterns of ships and aircraft, and monitor our environment. Defense satellites provide high-resolution images of objects on the ground to protect our troops and allies. Telecommunication satellites have interwoven business sectors, corporations, and markets into global networks. Nevertheless, orbiting satellites may not be the best choice for all applications requiring a high vantage point. Satellites and their payloads are expensive, and launching them by rocket is expensive and risky. They must operate in the extreme conditions of space, bombarded by radiation and with no airflow to cool their electronics. Only valuable, long-term missions would seem to justify the expense and risk of a satellite. Even then, satellites are not always the best choice. They typically cannot hop to a new orbit, and some uses, such as local and global communications, require a large number of satellites to ensure adequate coverage. There is clearly a large potential role for high-altitude, atmospheric vehicles that can stay aloft for very long periods (weeks or months) and can roam virtually anywhere.
Families of FPGA-Based Accelerators for Approximate String Matching
Van Court, Tom; Herbordt, Martin C.
2011-01-01
Dynamic programming for approximate string matching is a large family of different algorithms, which vary significantly in purpose, complexity, and hardware utilization. Many implementations have reported impressive speed-ups, but have typically been point solutions – highly specialized and addressing only one or a few of the many possible options. The problem to be solved is creating a hardware description that implements a broad range of behavioral options without losing efficiency due to feature bloat. We report a set of three component types that address different parts of the approximate string matching problem. This allows each application to choose the feature set required, then make maximum use of the FPGA fabric according to that application’s specific resource requirements. Multiple, interchangeable implementations are available for each component type. We show that these methods allow the efficient generation of a large, if not complete, family of accelerators for this application. This flexibility was obtained while retaining high performance: We have evaluated a sample against serial reference codes and found speed-ups of from 150× to 400× over a high-end PC. PMID:21603598
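The behavioral options these components cover are all variations of the same dynamic-programming recurrence. The Python sketch below is a plain-software reference of one such variant (global edit distance with configurable substitution and gap costs), not the FPGA implementation; it shows the cell-update kernel that the hardware parallelizes:

    def edit_distance(a, b, sub_cost=1, gap_cost=1):
        """Global approximate-match score: minimum cost to turn string a into b."""
        prev = list(range(0, (len(b) + 1) * gap_cost, gap_cost))
        for i, ca in enumerate(a, start=1):
            curr = [i * gap_cost]
            for j, cb in enumerate(b, start=1):
                curr.append(min(
                    prev[j] + gap_cost,                           # delete ca
                    curr[j - 1] + gap_cost,                       # insert cb
                    prev[j - 1] + (0 if ca == cb else sub_cost),  # match/substitute
                ))
            prev = curr
        return prev[-1]

    print(edit_distance("ACGTTGCA", "ACTTGGCA"))   # -> 2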
Folding Proteins at 500 ns/hour with Work Queue
Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.
2014-01-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799
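Conceptually, the weighted-ensemble family that AWE belongs to replaces one long trajectory by many short ones whose statistical weights are periodically rebalanced within bins. The Python sketch below shows a generic split/merge resampling step under simplified assumptions (pairwise merging, a fixed walker count per bin); it is a textbook-style illustration, not the authors' AWE or Work Queue code:

    import random
    from collections import defaultdict

    def resample(walkers, target_per_bin=4):
        """walkers: list of (bin_id, weight, state). Split heavy walkers and merge
        light ones so each occupied bin ends up with target_per_bin walkers while
        the total statistical weight is conserved."""
        by_bin = defaultdict(list)
        for w in walkers:
            by_bin[w[0]].append(w)
        new = []
        for bin_id, group in by_bin.items():
            # Split: clone the heaviest walker (halving its weight) until the bin is full.
            while len(group) < target_per_bin:
                b, wt, state = max(group, key=lambda w: w[1])
                group.remove((b, wt, state))
                group += [(b, wt / 2.0, state), (b, wt / 2.0, state)]
            # Merge: combine the two lightest walkers, keeping one of their states
            # with probability proportional to its weight.
            while len(group) > target_per_bin:
                group.sort(key=lambda w: w[1])
                (b1, w1, s1), (b2, w2, s2) = group[0], group[1]
                keep = s1 if random.random() < w1 / (w1 + w2) else s2
                group = [(bin_id, w1 + w2, keep)] + group[2:]
            new.extend(group)
        return new

    walkers = [(0, 0.50, "a"), (0, 0.10, "b"), (1, 0.25, "c"),
               (1, 0.05, "d"), (1, 0.05, "e"), (1, 0.05, "f"), (1, 0.01, "g")]
    resampled = resample(walkers)
    print(len(resampled), sum(w for _, w, _ in resampled))  # count rebalanced, weight conserved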
Improving the accuracy of walking piezo motors.
den Heijer, M; Fokkema, V; Saedi, A; Schakel, P; Rost, M J
2014-05-01
Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm range can be achieved if the motor is operated within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suited to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Juhee; Lee, Sungpyo; Lee, Moo Hyung
Quasi-unipolar non-volatile organic transistor memory (NOTM) can combine the best characteristics of conventional unipolar and ambipolar NOTMs and, as a result, exhibit improved device performance. Unipolar NOTMs typically exhibit a large signal ratio between the programmed and erased current signals but also require a large voltage to program and erase the memory cells. Meanwhile, an ambipolar NOTM can be programmed and erased at lower voltages, but the resulting signal ratio is small. By embedding a discontinuous n-type fullerene layer within a p-type pentacene film, quasi-unipolar NOTMs are fabricated, of which the signal storage utilizes both electrons and holes while the electrical signal relies on only hole conduction. These devices exhibit superior memory performance relative to both pristine unipolar pentacene devices and ambipolar fullerene/pentacene bilayer devices. The quasi-unipolar NOTM exhibited a larger signal ratio between the programmed and erased states while also reducing the voltage required to program and erase a memory cell. This simple approach should be readily applicable for various combinations of advanced organic semiconductors that have been recently developed and thereby should make a significant impact on organic memory research.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Investigation of Propulsion System Requirements for Spartan Lite
NASA Technical Reports Server (NTRS)
Urban, Mike; Gruner, Timothy; Morrissey, James; Sneiderman, Gary
1998-01-01
This paper discusses the (chemical or electric) propulsion system requirements necessary to increase the Spartan Lite science mission lifetime to over a year. Spartan Lite is an extremely low-cost (less than 10 M) spacecraft bus being developed at the NASA Goddard Space Flight Center to accommodate sounding rocket class (40 W, 45 kg, 35 cm dia by 1 m length) payloads. While Spartan Lite is compatible with expendable launch vehicles, most missions are expected to be tertiary payloads deployed by the Space Shuttle. To achieve a one year or longer mission life from typical Shuttle orbits, some form of propulsion system is required. Chemical propulsion systems (characterized by high thrust impulsive maneuvers) and electrical propulsion systems (characterized by low-thrust long duration maneuvers and the additional requirement for electrical power) are discussed. The performance of the Spartan Lite attitude control system in the presence of large disturbance torques is evaluated using the Trectops(TM) dynamic simulator. This paper discusses the performance goals and resource constraints for candidate Spartan Lite propulsion systems and uses them to specify quantitative requirements against which the systems are evaluated.
Micro-precision control/structure interaction technology for large optical space systems
NASA Technical Reports Server (NTRS)
Sirlin, Samuel W.; Laskin, Robert A.
1993-01-01
The CSI program at JPL is chartered to develop the structures and control technology needed for sub-micron level stabilization of future optical space systems. The extreme dimensional stability required for such systems derives from the need to maintain the alignment and figure of critical optical elements to a small fraction (typically 1/20th to 1/50th) of the wavelength of detected radiation. The wavelength is about 0.5 micron for visible light and 0.1 micron for ultra-violet light. This lambda/50 requirement is common to a broad class of optical systems including filled aperture telescopes (with monolithic or segmented primary mirrors), sparse aperture telescopes, and optical interferometers. The challenge for CSI arises when such systems become large, with spatially distributed optical elements mounted on a lightweight, flexible structure. In order to better understand the requirements for micro-precision CSI technology, a representative future optical system was identified and developed as an analytical testbed for CSI concepts and approaches. An optical interferometer was selected as a stressing example of the relevant mission class. The system that emerged was termed the Focus Mission Interferometer (FMI). This paper will describe the multi-layer control architecture used to address the FMI's nanometer level stabilization requirements. In addition the paper will discuss on-going and planned experimental work aimed at demonstrating that multi-layer CSI can work in practice in the relevant performance regime.
NASA Technical Reports Server (NTRS)
Cramer, K. E.; Winfree, W. P.
2005-01-01
The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method as compared to ultrasonic techniques which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. A typical implementation of PCA is when the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from the RCC materials. Details of a one-dimensional analytic model and a two-dimensional finite-element model will be presented. An overview of the PCA process as well as a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens will be shown. Finally, a number of different applications of this technology to various RCC components will be presented.
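The fixed-eigenvector variant of PCA can be sketched compactly: generate surface-temperature responses from a one-dimensional flash-heating model, extract a small time-domain basis from them once, and then project every measured pixel history onto that basis instead of recomputing eigenvectors from each data set. The Python/numpy sketch below is a schematic illustration with placeholder material properties and synthetic stand-in data, not the Langley processing code:

    import numpy as np

    t = np.linspace(0.05, 5.0, 200)               # frame times after the flash [s]
    alpha = 4.0e-6                                # assumed thermal diffusivity [m^2/s]

    def front_surface_response(L, t, n_terms=30):
        """1-D analytic front-surface temperature after flash heating of a plate of
        thickness L (series solution, arbitrary amplitude units)."""
        s = np.ones_like(t)
        for n in range(1, n_terms + 1):
            s += 2.0 * np.exp(-(n * np.pi) ** 2 * alpha * t / L ** 2)
        return s / L

    # Build the model set once and extract a fixed time-domain eigenvector basis.
    thicknesses = np.linspace(2e-3, 8e-3, 40)     # hypothetical thickness range [m]
    model = np.log(np.array([front_surface_response(L, t) for L in thicknesses]))
    model -= model.mean(axis=0)
    _, _, Vt = np.linalg.svd(model, full_matrices=False)
    basis = Vt[:3]                                # dominant components only

    # Project measured pixel histories (here synthetic stand-ins) onto the fixed basis;
    # defects no longer influence the basis itself, only the projection images.
    pixels = np.log(np.array([front_surface_response(L, t)
                              for L in np.random.uniform(2e-3, 8e-3, 32 * 32)]))
    scores = (pixels - pixels.mean(axis=0)) @ basis.T
    component_images = scores.reshape(32, 32, 3)
    print(component_images.shape)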
Miall, A.D.; Turner-Peterson, C. E.
1989-01-01
Techniques of architectural element analysis and lateral profiling have been applied to the fluvial Westwater Canyon Member of the Morrison Formation (Jurassic) in southern San Juan Basin. On a large scale, the sandstone-body architecture consists mainly of a series of tabular sandstone sheets 5-15 m thick and hundreds of meters wide, separated by thin fine-grained units. Internally these sheets contain lateral accretion surfaces and are cut by channels 10-20 m deep and at least 250 m wide. On a more detailed scale, interpretations made from large-scale photomosaics show a complex of architectural elements and bounding surfaces. Typical indicators of moderate- to high-sinuosity channels (lateral accretion deposits) coexist in the same outcrop with downstream-accreted macroform deposits that are typical of sand flats of low-sinuosity, multiple-channel rivers. Broad, deep channels with gently to steeply dipping margins were mapped in several of the outcrops by carefully tracing major bounding surfaces. Locally thick accumulations of plane-laminated and low-angle cross-laminated sandstone lithofacies suggest rapid flow, probably transitional to upper flow regime conditions. Such a depositional style is most typical of ephemeral rivers or those periodically undergoing major seasonal (or more erratic) stage fluctuations, an interpretation consistent with independent mineralogical evidence of aridity. Fining-upward sequences are rare in the project area, contrary to the descriptions of Campbell (1976). The humid alluvial fan model of Galloway (1978) cannot be substantiated and, similarly, the architectural model of Campbell (1976) requires major revision. Comparisons with the depositional architecture of the large Indian rivers, such as the Ganges and Brahmaputra, still seem reasonable, as originally proposed by Campbell (1976), although there is now convincing evidence for aridity and for major stage fluctuations, which differs both from those modern rivers and Campbell's interpretation. © 1989.
UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.
Meinicke, Peter
2009-09-02
Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.
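The profiling step itself is simple bookkeeping once domain assignments exist: count assignments per functional category, normalize to frequencies, and compare genomes with a distance over the frequency vectors. The Python sketch below uses made-up Pfam identifiers and a Euclidean distance standing in for whatever dissimilarity score the server actually computes:

    from collections import Counter
    import math

    def functional_profile(assignments, categories):
        """assignments: iterable of category IDs, one per detected domain.
        Returns a frequency vector over a fixed, ordered category list."""
        counts = Counter(assignments)
        total = sum(counts.values()) or 1
        return [counts[c] / total for c in categories]

    def dissimilarity(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    categories = ["PF00001", "PF00005", "PF00072", "PF00106"]      # made-up subset
    genome_a = ["PF00005", "PF00005", "PF00072", "PF00001"]
    genome_b = ["PF00106", "PF00005", "PF00072", "PF00072", "PF00072"]
    pa, pb = (functional_profile(g, categories) for g in (genome_a, genome_b))
    print(pa, pb, round(dissimilarity(pa, pb), 3))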
NASA Astrophysics Data System (ADS)
Irving, D. H.; Rasheed, M.; Hillman, C.; O'Doherty, N.
2012-12-01
Oilfield management is moving to a more operational footing with near-realtime seismic and sensor monitoring governing drilling, fluid injection and hydrocarbon extraction workflows within safety, productivity and profitability constraints. To date, the geoscientific analytical architectures employed are configured for large volumes of data, computational power or analytical latency, and compromises in system design must be made to achieve all three aspects. These challenges are encapsulated by the phrase 'Big Data' which has been employed for over a decade in the IT industry to describe the challenges presented by data sets that are too large, volatile and diverse for existing computational architectures and paradigms. We present a data-centric architecture developed to support a geoscientific and geotechnical workflow whereby: scientific insight is continuously applied to fresh data; insights and derived information are incorporated into engineering and operational decisions; and data governance and provenance are routine within a broader data management framework. Strategic decision support systems in large infrastructure projects such as oilfields are typically relational data environments; data modelling is pervasive across analytical functions. However, subsurface data and models are typically non-relational (i.e. file-based) in the form of large volumes of seismic imaging data or rapid streams of sensor feeds and are analysed and interpreted using niche applications. The key architectural challenge is to move data and insight from a non-relational to a relational, or structured, data environment for faster and more integrated analytics. We describe how a blend of MapReduce and relational database technologies can be applied in geoscientific decision support, and the strengths and weaknesses of each in such an analytical ecosystem. In addition we discuss hybrid technologies that use aspects of both and translational technologies for moving data and analytics across these platforms. Moving to a data-centric architecture requires data management methodologies to be overhauled by default and we show how end-to-end data provenancing and dependency management is implicit in such an environment and how it benefits system administration as well as the user community. Whilst the architectural experiences are drawn from the oil industry, we believe that they are more broadly applicable in academic and government settings where large volumes of data are added to incrementally and require revisiting with low analytical latency and we suggest application to earthquake monitoring and remote sensing networks.
NASA Astrophysics Data System (ADS)
Nelson, Johanna; Yang, Yuan; Misra, Sumohan; Andrews, Joy C.; Cui, Yi; Toney, Michael F.
2013-09-01
Radiation damage is a topic typically sidestepped in formal discussions of characterization techniques utilizing ionizing radiation. Nevertheless, such damage is critical to consider when planning and performing experiments requiring large radiation doses or radiation sensitive samples. High resolution, in situ transmission X-ray microscopy of Li-ion batteries involves both large X-ray doses and radiation sensitive samples. To successfully identify changes over time solely due to an applied current, the effects of radiation damage must be identified and avoided. Although radiation damage is often significantly sample and instrument dependent, the general procedure to identify and minimize damage is transferable. Here we outline our method of determining and managing the radiation damage observed in lithium sulfur batteries during in situ X-ray imaging on the transmission X-ray microscope at Stanford Synchrotron Radiation Lightsource.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
Shomaker, Lauren B; Tanofsky-Kraff, Marian; Zocca, Jaclyn M; Courville, Amber; Kozlosky, Merel; Columbo, Kelli M; Wolkoff, Laura E; Brady, Sheila M; Crocker, Melissa K; Ali, Asem H; Yanovski, Susan Z; Yanovski, Jack A
2010-10-01
Eating in the absence of hunger (EAH) is typically assessed by measuring youths' intake of palatable snack foods after a standard meal designed to reduce hunger. Because energy intake required to reach satiety varies among individuals, a standard meal may not ensure the absence of hunger among participants of all weight strata. The objective of this study was to compare adolescents' EAH observed after access to a very large food array with EAH observed after a standardized meal. Seventy-eight adolescents participated in a randomized crossover study during which EAH was measured as intake of palatable snacks after ad libitum access to a very large array of lunch-type foods (>10,000 kcal) and after a lunch meal standardized to provide 50% of the daily estimated energy requirements. The adolescents consumed more energy and reported less hunger after the large-array meal than after the standardized meal (P values < 0.001). They consumed ≈70 kcal less EAH after the large-array meal than after the standardized meal (295 ± 18 compared with 365 ± 20 kcal; P < 0.001), but EAH intakes after the large-array meal and after the standardized meal were positively correlated (P values < 0.001). The body mass index z score and overweight were positively associated with EAH in both paradigms after age, sex, race, pubertal stage, and meal intake were controlled for (P values ≤ 0.05). EAH is observable and positively related to body weight regardless of whether youth eat in the absence of hunger from a very large-array meal or from a standardized meal. This trial was registered at clinicaltrials.gov as NCT00631644.
Fast steering and quick positioning of large field-of-regard, two-axis, four-gimbaled sight
NASA Astrophysics Data System (ADS)
Ansari, Zahir Ahmed; Nigam, Madhav Ji; Kumar, Avnish
2017-07-01
Fast steering and quick positioning are prime requirements of the current electro-optical tracking system to achieve quick target acquisition. A scheme has been proposed for realizing these features using a two-axis, four-gimbaled sight. For steering the line of sight in the stabilization mode, the outer gimbal is slaved to the gyro-stabilized inner gimbal. Typically, the inner gimbals have direct drives and outer gimbals have geared drives, which result in a mismatch in the acceleration capability of their servo loops. This limits the allowable control bandwidth for the inner gimbal. However, to achieve high stabilization accuracy, high bandwidth control loops are essential. This contradictory requirement has been addressed by designing a suitable command conditioning module for the inner gimbals. Also, large line-of-sight freedom in the pitch axis is required to provide a wide-area surveillance capability for airborne applications. This leads to a loss of freedom along the yaw axis as the pitch angle goes beyond 70 deg or so. This is addressed by making the outer gimbal the master beyond a certain pitch angle. Moreover, a mounting scheme for the gyro has been proposed to accomplish yaw axis stabilization for 110-deg pitch angle movement with a single two-axis gyro.
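The abstract does not give the actual switching law; as a hedged sketch of the described hand-over, the snippet below makes the outer gimbal the yaw master once pitch exceeds roughly 70 deg, with a small hysteresis band. The threshold value, hysteresis, and names are assumptions.

```python
# Hedged sketch of the gimbal master/slave hand-over described above.
# PITCH_SWITCH_DEG and the returned mode labels are illustrative assumptions.
PITCH_SWITCH_DEG = 70.0

def select_yaw_master(pitch_deg: float, hysteresis_deg: float = 2.0,
                      current_master: str = "inner") -> str:
    """Return which gimbal should steer the yaw line of sight.

    Below the switch angle the gyro-stabilized inner gimbal is master and the
    outer gimbal is slaved to it; near +/-90 deg pitch the inner yaw axis loses
    authority, so the outer gimbal takes over. A small hysteresis band avoids
    chattering around the threshold.
    """
    p = abs(pitch_deg)
    if current_master == "inner" and p > PITCH_SWITCH_DEG + hysteresis_deg:
        return "outer"
    if current_master == "outer" and p < PITCH_SWITCH_DEG - hysteresis_deg:
        return "inner"
    return current_master

# Example: sweeping pitch from 0 to 110 deg flips the master once, near 72 deg.
mode = "inner"
for pitch in range(0, 111, 5):
    mode = select_yaw_master(pitch, current_master=mode)
```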
Exploring model based engineering for large telescopes: getting started with descriptive models
NASA Astrophysics Data System (ADS)
Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.
2008-07-01
Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as one decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysMLTM) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of requirements solicitation and review, analysis and design alike. Furthermore, a model is an effective communication instrument against the misinterpretation pitfalls which are typical of cross-disciplinary activities when using natural language only or free-format diagrams. Modeling the essential characteristics of the system, such as its interfaces, structure and behavior, addresses important system-level issues. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects and control decisions and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines and the required tool chain are presented.
A texture-based framework for improving CFD data visualization in a virtual environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bivins, Gerrick O'Ron
2005-01-01
In the field of computational fluid dynamics (CFD) accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are properties that are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations, and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
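The paper's line model is not specified in the abstract; the following sketch only illustrates the stated idea of replacing the lumped confluence mass with an "equilibrium point" found by zeroing the net line force at each step. Anchor positions, stiffnesses, and rest lengths are illustrative assumptions.

```python
# Hedged sketch: locate the confluence "equilibrium point" as the position where
# the sum of tension-only line forces vanishes (static equilibrium), instead of
# integrating the dynamics of a small lumped mass. All numbers are illustrative.
import numpy as np
from scipy.optimize import fsolve

anchors = np.array([[0.0, 0.0, 10.0],   # three riser attachment points (m)
                    [1.0, 0.0, 10.0],
                    [0.5, 1.0, 10.0],
                    [0.5, 0.5, 0.0]])   # suspended entry body below (m)
stiffness = np.array([5.0e4, 5.0e4, 5.0e4, 2.0e4])   # line stiffnesses (N/m)
rest_len = np.array([2.0, 2.0, 2.0, 2.0])            # unstretched lengths (m)

def net_force(point):
    """Sum of line forces acting on the candidate confluence point."""
    total = np.zeros(3)
    for anchor, k, L0 in zip(anchors, stiffness, rest_len):
        vec = anchor - point
        dist = np.linalg.norm(vec)
        stretch = max(dist - L0, 0.0)          # lines pull but cannot push
        if dist > 0.0:
            total += k * stretch * vec / dist
    return total

# Solve for the static-equilibrium confluence point at this integration step.
equilibrium = fsolve(net_force, x0=np.array([0.5, 0.5, 5.0]))
print("confluence point:", equilibrium, "residual force:", net_force(equilibrium))
```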
High Efficiency Microwave Power Amplifier: From the Lab to Industry
NASA Technical Reports Server (NTRS)
Sims, William Herbert, III; Bell, Joseph L. (Technical Monitor)
2001-01-01
Since the beginnings of space travel, various microwave power amplifier designs have been employed. These included Class-A, -B, and -C bias arrangements. However, a shared limitation of these topologies is the inherently high total input power consumption associated with the generation of radio frequency (RF)/microwave power. The power amplifier has always been the largest drain on the limited available power on the spacecraft. Typically, the conversion efficiency of a microwave power amplifier is 10 to 20%. For a typical microwave power amplifier of 20 watts, input DC power of at least 100 watts is required. Such a large demand for input power suggests that a better method of RF/microwave power generation is required. The price paid for using a linear amplifier where high linearity is unnecessary includes higher initial and operating costs, lower DC-to-RF conversion efficiency, high power consumption, higher power dissipation and the accompanying need for higher capacity heat removal means, and an amplifier that is more prone to parasitic oscillation. The first use of a higher efficiency mode of power generation was described by Baxandall in 1959. This higher efficiency mode, Class-D, is achieved through distinct switching techniques that reduce the switching, conduction, and gate drive losses of a given transistor.
Hoecker, Christian; Smail, Fiona; Pick, Martin; Weller, Lee; Boies, Adam M
2017-11-06
The floating catalyst chemical vapor deposition (FC-CVD) process permits macro-scale assembly of nanoscale materials, enabling continuous production of carbon nanotube (CNT) aerogels. Despite the intensive research in the field, fundamental uncertainties remain regarding how catalyst particle dynamics within the system influence the CNT aerogel formation, thus limiting effective scale-up. While aerogel formation in FC-CVD reactors requires a catalyst (typically iron, Fe) and a promoter (typically sulfur, S), their synergistic roles are not fully understood. This paper presents a paradigm shift in the understanding of the role of S in the process, with new experimental studies identifying that S lowers the nucleation barrier of the catalyst nanoparticles. Furthermore, CNT aerogel formation requires a critical threshold of FexCy > 160 mg/m3, but is surprisingly independent of the initial catalyst diameter or number concentration. The robustness of the critical catalyst mass concentration principle is proved further by producing CNTs using alternative catalyst systems: Fe nanoparticles from a plasma spark generator and cobaltocene and nickelocene precursors. This finding provides evidence that low-cost and high throughput CNT aerogel routes may be achieved by decoupled and enhanced catalyst production and control, opening up new possibilities for large-scale CNT synthesis.
Review: Feeding conserved forage to horses: recent advances and recommendations.
Harris, P A; Ellis, A D; Fradinho, M J; Jansson, A; Julliand, V; Luthersson, N; Santos, A S; Vervuert, I
2017-06-01
The horse is a non-ruminant herbivore adapted to eating plant-fibre or forage-based diets. Some horses are stabled for most of the day with limited or no access to fresh pasture and are fed preserved forage, typically as hay or haylage and sometimes silage. This raises questions with respect to the quality and suitability of these preserved forages (considering production, nutritional content, digestibility as well as hygiene) and the required quantities. Especially for performance horses, forage is often replaced with energy-dense feedstuffs, which can result in a reduction in the proportion of the diet that is forage based. This may adversely affect the health, welfare, behaviour and even performance of the horse. In the past 20 years a large body of research work has contributed to a better and deeper understanding of equine forage needs and the physiological and behavioural consequences if these are not met. Recent nutrient requirement systems have incorporated some, but not all, of this new knowledge into their recommendations. This review paper amalgamates recommendations based on the latest understanding in forage feeding for horses, defining forage types and preservation methods, hygienic quality, feed intake behaviour, typical nutrient composition, digestion and digestibility as well as health and performance implications. Based on this, consensual applied recommendations for feeding preserved forages are provided.
Beam Steering Devices Reduce Payload Weight
NASA Technical Reports Server (NTRS)
2012-01-01
Scientists have long been able to shift the direction of a laser beam, steering it toward a target, but often the strength and focus of the light is altered. For precision applications, where the quality of the beam cannot be compromised, scientists have typically turned to mechanical steering methods, redirecting the source of the beam by swinging the entire laser apparatus toward the target. Just as the mechanical methods used for turning cars have evolved into simpler, lighter, power steering methods, so has the means by which researchers can direct lasers. Some of the typical contraptions used to redirect lasers are large and bulky, relying on steering gimbals (pivoted, rotating supports) to shift the device toward its intended target. These devices, some as large and awkward as a piece of heavy luggage, are subject to the same issues confronted by mechanical parts: Components rub, wear out, and get stuck. The poor reliability and bulk, not to mention the power requirements to run one of these machines, have made mechanical beam steering components less than ideal for use in applications where weight, bulk, and maneuverability are prime concerns, such as on an unmanned aerial vehicle (UAV) or a microscope. The solution to developing reliable, lighter weight, nonmechanical steering methods to replace the hefty steering boxes was to think outside the box, and a NASA research partner did just that by developing a new beam steering method that bends and redirects the beam, as opposed to shifting the entire apparatus. The benefits include lower power requirements, a smaller footprint, reduced weight, and better control and flexibility in steering capabilities. Such benefits are realized without sacrificing aperture size, efficiency, or scanning range, and can be applied to myriad uses: propulsion systems, structures, radiation protection systems, and landing systems.
NASA Astrophysics Data System (ADS)
Shenoy, Dinesh P.; Jones, Terry J.; Packham, Chris; Lopez-Rodriguez, Enrique
2015-07-01
We present 2-5 μm adaptive optics (AO) imaging and polarimetry of the famous hypergiant stars IRC +10420 and VY Canis Majoris. The imaging polarimetry of IRC +10420 with MMT-Pol at 2.2 μm resolves nebular emission with intrinsic polarization of 30%, with a high surface brightness indicating optically thick scattering. The relatively uniform distribution of this polarized emission both radially and azimuthally around the star confirms previous studies that place the scattering dust largely in the plane of the sky. Using constraints on scattered light consistent with the polarimetry at 2.2 μm, extrapolation to wavelengths in the 3-5 μm band predicts a scattered light component significantly below the nebular flux that is observed in our Large Binocular Telescope/LMIRCam 3-5 μm AO imaging. Under the assumption this excess emission is thermal, we find a color temperature of ˜500 K is required, well in excess of the emissivity-modified equilibrium temperature for typical astrophysical dust. The nebular features of VY CMa are found to be highly polarized (up to 60%) at 1.3 μm, again with optically thick scattering required to reproduce the observed surface brightness. This star’s peculiar nebular feature dubbed the “Southwest Clump” is clearly detected in the 3.1 μm polarimetry as well, which, unlike IRC +10420, is consistent with scattered light alone. The high intrinsic polarizations of both hypergiants’ nebulae are compatible with optically thick scattering for typical dust around evolved dusty stars, where the depolarizing effect of multiple scatters is mitigated by the grains’ low albedos. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.
Batalle, Dafnis; Muñoz-Moreno, Emma; Figueras, Francesc; Bargallo, Nuria; Eixarch, Elisenda; Gratacos, Eduard
2013-12-01
Obtaining individual biomarkers for the prediction of altered neurological outcome is a challenge of modern medicine and neuroscience. Connectomics based on magnetic resonance imaging (MRI) stands as a good candidate to exhaustively extract information from MRI by integrating the information obtained in a few network features that can be used as individual biomarkers of neurological outcome. However, this approach typically requires the use of diffusion and/or functional MRI to extract individual brain networks, which require long acquisition times and are extremely sensitive to motion artifacts, critical problems when scanning fetuses and infants. Extraction of individual networks based on morphological similarity from gray matter is a new approach that benefits from the power of graph theory analysis to describe gray matter morphology as a large-scale morphological network from a typical clinical anatomic acquisition such as T1-weighted MRI. In the present paper we propose a methodology to normalize these large-scale morphological networks to a brain network with standardized size based on a parcellation scheme. The proposed methodology was applied to reconstruct individual brain networks of 63 one-year-old infants, 41 infants with intrauterine growth restriction (IUGR) and 22 controls, showing altered network features in the IUGR group, and their association with neurodevelopmental outcome at two years of age, assessed with the Bayley Scales of Infant and Toddler Development, third edition, by means of ordinal regression analysis of the network features. Although it must be more widely assessed, this methodology stands as a good candidate for the development of biomarkers for altered neurodevelopment in the pediatric population. © 2013 Elsevier Inc. All rights reserved.
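The authors' exact normalization is not reproduced here; the sketch below only illustrates the general step of collapsing a node-level morphological similarity network onto a fixed parcellation so that every subject's network has the same standardized size. The parcel count, labels, and averaging rule are assumptions.

```python
# Hedged sketch: normalize a node-level morphological network to a fixed
# parcellation by averaging inter-node similarities within each pair of parcels.
import numpy as np

def parcellate_network(node_adj: np.ndarray, node_parcel: np.ndarray,
                       n_parcels: int) -> np.ndarray:
    """Collapse an n_nodes x n_nodes similarity matrix to n_parcels x n_parcels."""
    region_adj = np.zeros((n_parcels, n_parcels))
    counts = np.zeros((n_parcels, n_parcels))
    for i in range(node_adj.shape[0]):
        for j in range(node_adj.shape[0]):
            if i == j:
                continue
            a, b = node_parcel[i], node_parcel[j]
            region_adj[a, b] += node_adj[i, j]
            counts[a, b] += 1
    region_adj = np.where(counts > 0, region_adj / np.maximum(counts, 1), 0.0)
    np.fill_diagonal(region_adj, 0.0)
    return region_adj

# Example: 200 gray-matter nodes randomly assigned to an assumed 90-parcel scheme.
rng = np.random.default_rng(0)
node_adj = rng.random((200, 200)); node_adj = (node_adj + node_adj.T) / 2
labels = rng.integers(0, 90, size=200)
std_net = parcellate_network(node_adj, labels, n_parcels=90)
print(std_net.shape)   # (90, 90), the same standardized size for every subject
```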
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
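The discriminative random field model itself is not reproduced here. As a hedged stand-in for the general step of drawing equi-probable realizations conditioned on sparse borehole data, the sketch below uses a simple Gaussian-process conditional simulation; the kernel, length scale, and borehole values are assumptions, not the authors' trained model.

```python
# Hedged sketch: draw equi-probable 1-D realizations of a subsurface property
# conditioned on sparse "borehole" observations, using a Gaussian-process
# conditional simulation as a simplified stand-in for the trained statistical
# (DRF) model described above. All numbers are illustrative assumptions.
import numpy as np

def rbf(x1, x2, length=15.0, var=1.0):
    """Squared-exponential covariance between two coordinate vectors."""
    return var * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

grid = np.linspace(0.0, 100.0, 200)        # depth coordinate (m), assumed
bh_x = np.array([10.0, 45.0, 80.0])        # borehole sample depths (m), assumed
bh_y = np.array([0.8, -0.3, 0.5])          # log-permeability at boreholes, assumed

K_oo = rbf(bh_x, bh_x) + 1e-6 * np.eye(bh_x.size)
K_go = rbf(grid, bh_x)
K_gg = rbf(grid, grid)

mean_cond = K_go @ np.linalg.solve(K_oo, bh_y)
cov_cond = K_gg - K_go @ np.linalg.solve(K_oo, K_go.T)
cov_cond += 1e-8 * np.eye(grid.size)       # jitter for numerical stability

rng = np.random.default_rng(1)
realizations = rng.multivariate_normal(mean_cond, cov_cond, size=50)
# Each row honors the borehole data but varies elsewhere, giving an ensemble
# that could feed Monte Carlo forecasts of remediation performance.
```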
Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward
NASA Astrophysics Data System (ADS)
Daley, T. M.
2012-12-01
The injection of CO2 into subsurface formations is at least 50 years old with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller scale pilots can allow more frequent measurements as either individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, there is typically integration of characterization and monitoring, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales, while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM) where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.
Combined acoustic and optical trapping
Thalhammer, G.; Steiger, R.; Meinschad, M.; Hill, M.; Bernet, S.; Ritsch-Marte, M.
2011-01-01
Combining several methods for contact free micro-manipulation of small particles such as cells or micro-organisms provides the advantages of each method in a single setup. Optical tweezers, which employ focused laser beams, offer very precise and selective handling of single particles. On the other hand, acoustic trapping with wavelengths of about 1 mm allows the simultaneous trapping of many, comparatively large particles. With conventional approaches it is difficult to fully employ the strengths of each method due to the different experimental requirements. Here we present the combined optical and acoustic trapping of motile micro-organisms in a microfluidic environment, utilizing optical macro-tweezers, which offer a large field of view and working distance of several millimeters and therefore match the typical range of acoustic trapping. We characterize the acoustic trapping forces with the help of optically trapped particles and present several applications of the combined optical and acoustic trapping, such as manipulation of large (75 μm) particles and active particle sorting. PMID:22025990
Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases
Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert
2010-01-01
Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has still required highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of the HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to effectively and efficiently explore large-scale RNAi-HCS image databases, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
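The authors' interface and image features are not described in enough detail to reproduce; the following hedged sketch shows the generic CBIR-with-relevance-feedback loop (cosine-similarity ranking with a Rocchio-style query update) that such interfaces build on. Feature dimensions and weights are assumptions.

```python
# Hedged sketch of content-based image retrieval (CBIR) with Rocchio-style
# relevance feedback, as a generic illustration of the interaction loop
# described above (not the authors' actual interface or features).
import numpy as np

def cosine_rank(query, features):
    """Rank database items by cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return np.argsort(f @ q)[::-1]

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward user-marked relevant images and away from others."""
    q = alpha * query
    if len(relevant):
        q += beta * relevant.mean(axis=0)
    if len(nonrelevant):
        q -= gamma * nonrelevant.mean(axis=0)
    return q

# Example: 10,000 images summarized by (assumed) 64-dimensional phenotype features.
rng = np.random.default_rng(0)
db = rng.random((10_000, 64))
query = db[42] + 0.05 * rng.standard_normal(64)   # start from an example image

ranking = cosine_rank(query, db)[:20]             # first screen of results
relevant = db[ranking[:5]]                        # user marks 5 as relevant
nonrelevant = db[ranking[15:20]]                  # and 5 as not relevant
query = rocchio_update(query, relevant, nonrelevant)
refined = cosine_rank(query, db)[:20]             # re-ranked results
```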
Thermal modeling and analysis of structurally complex spacecraft using the IDEAS system
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1983-01-01
Large antenna satellites of unprecedented sizes are needed for a number of applications. Antenna diameters on the order of 50 meters and upward are required. Such antennas involve the use of large expanses of lattice structures with hundreds or thousands of individual connecting members. Thermal effects are a crucial consideration in the design of such structures. Software capabilities have emerged which are coded to include major first order thermal effects and to purposely ignore, in the interest of computational efficiency, the secondary effects. The Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) is one such system. It has been developed for use in thermal-structural interaction analyses related to the design of large, structurally complex classes of future spacecraft. An IDEAS overview is presented. Attention is given to a typical antenna analysis using IDEAS, the thermal and loading analyses of a tetrahedral truss spacecraft, and ecliptic and polar orbit analyses.
Enhancement of structural stiffness in MEMS structures
NASA Astrophysics Data System (ADS)
Ilias, Samir; Picard, Francis; Topart, Patrice; Larouche, Carl; Jerominek, Hubert
2006-01-01
Many optical applications require smooth micromirror reflective surfaces with large radius of curvature. Usually when using surface micromachining technology and as a result of residual stress and stress gradient in thin films, the control of residual curvature is a difficult task. In this work, two engineering approaches were developed to enhance structural stiffness of micromirrors. 1) By integrating stiffening structures and thermal annealing. The stiffening structures consist of U-shaped profiles integrated with the mirror (dimension 200×300 μm2). 2) By combining selective electroplating and flip-chip based technologies. Nickel was used as electroplated material with optimal stress values around +/-10 MPa for layer thicknesses of about 10 μm. With the former approach, typical curvature radii of about 1.5 cm and 0.6 cm along mirror width and length were obtained, respectively. With the latter approach, an important improvement in the micromirror planarity and flatness was achieved with curvature radius up to 23 cm and roughness lower than 5 nm rms for typical 1000×1000 μm2 micromirrors.
Gas Ring-Imaging Cherenkov (GRINCH) Detector for the Super BigBite Spectrometer at Jefferson Lab
NASA Astrophysics Data System (ADS)
Averett, Todd; Wojtsekhowski, Bogdan; Amidouch, Abdellah; Danagoulian, Samuel; Niculescu, Gabriel; Niculescu, Ioana; Jefferson Lab SBS Collaboration Collaboration
2017-01-01
A new gas Cherenkov detector is under construction for the upcoming SuperBigBite spectrometer research program in Hall A at Jefferson Lab. The existing BigBite spectrometer is being upgraded to handle expected increases in event rate and background rate due to the increased luminosity required for the experimental program. The detector will primarily be used to separate good electron events from significant pion and electromagnetic contamination. In contrast to typical gas Cherenkov detectors that use large-diameter photomultiplier tubes and charge integrating ADCs, this detector uses an array of 510 small-diameter tubes that are more than 25x less sensitive to background. Cherenkov radiation clusters will be identified in this array using fast TDCs and a narrow timing window relative to typical ADC gates. In addition, a new FPGA-based DAQ system is being tested to provide a PID trigger using real-time cluster finding. Details of the detector and current status of the project will be presented.
A High-Resolution Measurement of Ball IR Black Paint's Low-Temperature Emissivity
NASA Technical Reports Server (NTRS)
Tuttle, Jim; Canavan, Ed; DiPirro, Mike; Li, Xiaoyi; Franck, Randy; Green, Dan
2011-01-01
High-emissivity paints are commonly used on thermal control system components. The total hemispheric emissivity values of such paints are typically high (nearly 1) at temperatures above about 100 Kelvin, but they drop off steeply at lower temperatures. A precise knowledge of this temperature-dependence is critical to designing passively-cooled components with low operating temperatures. Notable examples are the coatings on thermal radiators used to cool space-flight instruments to temperatures below 40 Kelvin. Past measurements of low-temperature paint emissivity have been challenging, often requiring large thermal chambers and typically producing data with high uncertainties below about 100 Kelvin. We describe a relatively inexpensive method of performing high-resolution emissivity measurements in a small cryostat. We present the results of such a measurement on Ball InfraRed Black™ (BIRB™), a proprietary surface coating produced by Ball Aerospace and Technologies Corp (BATC), which is used in spaceflight applications. We also describe a thermal model used in the error analysis.
A tool to estimate bar patterns and flow conditions in estuaries when limited data is available
NASA Astrophysics Data System (ADS)
Leuven, J.; Verhoeve, S.; Bruijns, A. J.; Selakovic, S.; van Dijk, W. M.; Kleinhans, M. G.
2017-12-01
The effects of human interventions, natural evolution of estuaries and rising sea-level on food security and flood safety are largely unknown. In addition, ecologists require quantified habitat area to study future evolution of estuaries, but they lack predictive capability of bathymetry and hydrodynamics. For example, crucial input required for ecological models are values of intertidal area, inundation time, peak flow velocities and salinity. While numerical models can reproduce these spatial patterns, their computational times are long and for each case a new model must be developed. Therefore, we developed a comprehensive set of relations that accurately predict the hydrodynamics and the patterns of channels and bars, using a combination of the empirical relations derived from approximately 50 estuaries and theory for bars and estuaries. The first step is to predict local tidal prisms, which is the tidal prism that flows through a given cross-section. Second, the channel geometry is predicted from tidal prism and hydraulic geometry relations. Subsequently, typical flow velocities can be estimated from the channel geometry and tidal prism. Then, an ideal estuary shape is fitted to the measured planform: the deviation from the ideal shape, which is defined as the excess width, gives a measure of the locations where tidal bars form and their summed width (Leuven et al., 2017). From excess width, typical hypsometries can be predicted per cross-section. In the last step, flow velocities are calculated for the full range of occurring depths and salinity is calculated based on the estuary shape. Here, we will present a prototype tool that predicts equilibrium bar patterns and typical flow conditions. The tool is easy to use because the only input required is the estuary outline and tidal amplitude. Therefore it can be used by policy makers and researchers from multiple disciplines, such as ecologists, geologists and hydrologists, for example for paleogeographic reconstructions.
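The calibrated empirical coefficients from the ~50-estuary dataset are not given in the abstract; the sketch below only illustrates the first part of the prediction chain (local tidal prism, then channel geometry, then a typical peak tidal velocity) with placeholder coefficients that should be treated as assumptions.

```python
# Hedged sketch of the prediction chain described above, with placeholder
# hydraulic-geometry coefficients (C_GEOM, N_GEOM) that are assumptions, not
# the calibrated values from the multi-estuary dataset.
import math

C_GEOM, N_GEOM = 1.0e-4, 1.0      # assumed A = C * P**N hydraulic geometry
TIDAL_PERIOD_S = 44_700           # semidiurnal M2 tide (~12.4 h)

def local_tidal_prism(upstream_surface_area_m2: float, tidal_range_m: float) -> float:
    """Step 1: tidal prism through a cross-section = upstream area x tidal range."""
    return upstream_surface_area_m2 * tidal_range_m

def channel_cross_section(prism_m3: float) -> float:
    """Step 2: channel cross-sectional area from a prism-based geometry relation."""
    return C_GEOM * prism_m3 ** N_GEOM

def typical_peak_velocity(prism_m3: float, area_m2: float) -> float:
    """Step 3: peak tidal velocity for a sinusoidal tide, U = pi * P / (A * T)."""
    return math.pi * prism_m3 / (area_m2 * TIDAL_PERIOD_S)

# Example cross-section: 40 km2 of upstream estuary surface, 2 m tidal range.
P = local_tidal_prism(40e6, 2.0)
A = channel_cross_section(P)
U = typical_peak_velocity(P, A)
print(f"prism {P:.2e} m3, area {A:.0f} m2, peak velocity {U:.2f} m/s")
```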
Long-term drought sensitivity of trees in second-growth forests in a humid region
Neil Pederson; Kacie Tackett; Ryan W. McEwan; Stacy Clark; Adrienne Cooper; Glade Brosi; Ray Eaton; R. Drew Stockwell
2012-01-01
Classical field methods of reconstructing drought using tree rings in humid, temperate regions typically target old trees from drought-prone sites. This approach limits investigators to a handful of species and excludes large amounts of data that might be useful, especially for coverage gaps in large-scale networks. By sampling in more "typical" forests, network...
Hearts, neck posture and metabolic intensity of sauropod dinosaurs.
Seymour, R S; Lillywhite, H B
2000-01-01
Hypothesized upright neck postures in sauropod dinosaurs require systemic arterial blood pressures reaching 700 mmHg at the heart. Recent data on ventricular wall stress indicate that their left ventricles would have weighed 15 times those of similarly sized whales. Such dimensionally, energetically and mechanically disadvantageous ventricles were highly unlikely in an endothermic sauropod. Accessory hearts or a siphon mechanism, with sub-atmospheric blood pressures in the head, were also not feasible. If the blood flow requirements of sauropods were typical of ectotherms, the left-ventricular blood volume and mass would have been smaller; nevertheless, the heart would have suffered the serious mechanical disadvantage of thick walls. It is doubtful that any large sauropod could have raised its neck vertically and endured high arterial blood pressure, and it certainly could not if it had high metabolic rates characteristic of endotherms. PMID:11052540
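A hedged back-of-envelope check shows where an arterial pressure of roughly 700 mmHg comes from: the hydrostatic column from heart to head plus the pressure still needed to perfuse the brain. The 8 m head height and 90 mmHg perfusion value are assumed for illustration.

```python
# Hedged back-of-envelope check of the ~700 mmHg figure: hydrostatic column
# from heart to head plus a typical cerebral perfusion pressure. The 8 m
# head height and 90 mmHg perfusion value are illustrative assumptions.
RHO_BLOOD = 1050.0          # kg/m3
G = 9.81                    # m/s2
MMHG_PER_PA = 1.0 / 133.322

head_above_heart_m = 8.0    # assumed vertical neck height for an upright posture
perfusion_mmhg = 90.0       # assumed pressure still needed at the head

hydrostatic_mmhg = RHO_BLOOD * G * head_above_heart_m * MMHG_PER_PA
total_mmhg = hydrostatic_mmhg + perfusion_mmhg
print(f"hydrostatic column: {hydrostatic_mmhg:.0f} mmHg, total at heart: {total_mmhg:.0f} mmHg")
# ~618 mmHg of blood column plus ~90 mmHg perfusion gives ~700 mmHg at the heart.
```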
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Numerical solutions of the complete Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach where the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdfs a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.
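A minimal sketch of the assumed-pdf idea under stated assumptions: the mean chemical source term is obtained by integrating an Arrhenius-type rate over an assumed temperature pdf rather than evaluating the rate at the mean temperature (the 'laminar' treatment). The rate constants and the Gaussian pdf are illustrative, not taken from the study.

```python
# Hedged sketch: average an Arrhenius-type source term over an assumed (Gaussian)
# temperature pdf and compare with the "laminar" evaluation at the mean
# temperature. Rate constants, mean, and rms fluctuation are illustrative.
import numpy as np

A_PRE = 1.0e6       # pre-exponential factor (assumed, 1/s)
T_ACT = 15000.0     # activation temperature Ea/R (assumed, K)

def rate(T):
    """Arrhenius-type reaction rate."""
    return A_PRE * np.exp(-T_ACT / T)

T_mean, T_rms = 1200.0, 150.0
T = np.linspace(T_mean - 4 * T_rms, T_mean + 4 * T_rms, 2001)
dT = T[1] - T[0]
pdf = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
pdf /= pdf.sum() * dT                              # normalize the assumed pdf

mean_rate_pdf = np.sum(rate(T) * pdf) * dT         # assumed-pdf average
mean_rate_laminar = rate(T_mean)                   # "laminar" treatment

print(f"laminar: {mean_rate_laminar:.3e}  assumed-pdf: {mean_rate_pdf:.3e}  "
      f"ratio: {mean_rate_pdf / mean_rate_laminar:.2f}")
# The strong nonlinearity of the exponential makes the pdf-averaged rate larger,
# which is why the assumed shape of the pdf matters.
```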
NASA Astrophysics Data System (ADS)
Burress, Jacob; Bethea, Donald; Troub, Brandon
2017-05-01
The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.
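A hedged sketch of why volumetric errors accumulate: in a Sievert-type measurement the adsorbed amount at each pressure step is a running sum over doses, so independent per-dose uncertainties add in quadrature, while a gravimetric point depends only on the current balance reading. The dose count and per-dose uncertainty below are assumed.

```python
# Hedged sketch: accumulation of uncertainty in a Sievert-type volumetric
# isotherm. Each dose contributes an independent uncertainty, and the adsorbed
# amount at pressure step k is a running sum, so its variance grows with k.
# The per-dose uncertainty below is an assumed, illustrative value.
import math

n_doses = 20
sigma_per_dose_mmol = 0.02     # assumed uncertainty of each dosed/retained amount

cumulative_sigma = [math.sqrt(k) * sigma_per_dose_mmol for k in range(1, n_doses + 1)]
print(f"uncertainty after dose 1:  {cumulative_sigma[0]:.3f} mmol")
print(f"uncertainty after dose 20: {cumulative_sigma[-1]:.3f} mmol")
# A gravimetric point, by contrast, depends only on the current balance reading
# (plus buoyancy terms), so its uncertainty does not grow with pressure step.
```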
Gas mixture studies for streamer operated Resistive Plate Chambers
NASA Astrophysics Data System (ADS)
Paoloni, A.; Longhin, A.; Mengucci, A.; Pupilli, F.; Ventura, M.
2016-06-01
Resistive Plate Chambers operated in streamer mode are interesting detectors in neutrino and astro-particle physics applications (like the OPERA and ARGO experiments). Such experiments are typically characterized by large area apparatuses with no stringent requirements on detector aging and rate capabilities. In this paper, results of cosmic ray tests performed on a RPC prototype using different gas mixtures are presented, the principal aim being the optimization of the TetraFluoroPropene concentration in Argon-based mixtures. The introduction of TetraFluoroPropene, besides its low Global Warming Potential, is helpful because it simplifies safety requirements, allowing isobutane to also be removed from the mixture. Results obtained with mixtures containing SF6, CF4, CO2, N2 and He are also shown, presented both in terms of detector properties (efficiency, multiple-streamer probability and time resolution) and in terms of streamer characteristics.
NASA Technical Reports Server (NTRS)
Manderscheid, J. M.; Kaufman, A.
1985-01-01
Turbine blades for reusable space propulsion systems are subject to severe thermomechanical loading cycles that result in large inelastic strains and very short lives. These components require the use of anisotropic high-temperature alloys to meet the safety and durability requirements of such systems. To assess the effects on blade life of material anisotropy, cyclic structural analyses are being performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 alloy. The analyses are based on a typical test stand engine cycle. Stress-strain histories at the airfoil critical location are computed using the MARC nonlinear finite-element computer code. The MARC solutions are compared to cyclic response predictions from a simplified structural analysis procedure developed at the NASA Lewis Research Center.
Light and lightened mirrors for astronomy
NASA Astrophysics Data System (ADS)
Fappani, Denis
2008-07-01
For ground-based astronomy, more and more large telescopes are emerging all around the world. As with space-borne telescopes, for which the use of lightened optics has always been a baseline for reducing payload mass, the same kinds of lightened/light mirrors are now increasingly used in ground-based instrumentation for astronomy, requiring larger and larger components. Through several examples of typical past realizations (class 0.5 m-1 m) for different astronomical projects requiring light or lightened mirrors for different reasons (optimisation of mass and stiffness, reduction of thermal inertia, increased dynamic performance for fast scanning purposes, ...), the presentation points out issues in the lightweight design, manufacturing and control of such parts, and gives a brief overview of the corresponding existing "state of the art" for these technologies at SESO.
Medical students as hospice volunteers: the benefits to a hospice organization.
Setla, Judith; Watson, Linda
2006-01-01
Hospices have regulatory requirements to provide volunteers who can assist families in a variety of ways. Hospices also typically provide large amounts of uncompensated education for students in various life sciences as part of their mission to promote quality care for those at the end of life. Separately, there is evidence of the educational benefits of exposing medical students to hospice patients and practices. But little has been published about the costs or benefits such teaching programs incur at the hospices involved. Hospice of Central New York developed a service-learning elective where first-year medical students were trained as volunteers. Despite initial concerns that significant staff time would be required to develop and maintain this elective, it appears to be an efficient way to satisfy the need for volunteers while contributing to the education of the involved students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, S.J.; Phillips, M.; Etheridge, D.
2012-07-01
Per regulatory agreement and facility closure design, U.S. Department of Energy Hanford Site nuclear fuel cycle structures and materials require in situ isolation in perpetuity and/or interim physicochemical stabilization as a part of final disposal or interim waste removal, respectively. To this end, grout materials are being used to encase facility structures or are being incorporated within structures containing hazardous and radioactive contaminants. Facilities where grout materials have been recently used for isolation and stabilization include: (1) spent fuel separations, (2) uranium trioxide calcining, (3) reactor fuel storage basin, (4) reactor fuel cooling basin transport rail tanker cars and casks, (5) cold vacuum drying and reactor fuel load-out, and (6) plutonium fuel metal finishing. Grout components primarily include: (1) portland cement, (2) fly ash, (3) aggregate, and (4) chemical admixtures. Mix designs for these typically include aggregate and non-aggregate slurries and bulk powders. Placement equipment includes: (1) concrete piston line pump or boom pump truck for grout slurry, (2) progressive cavity and shearing vortex pump systems, and (3) extendable boom fork lift for bulk powder dry grout mix. Grout slurries placed within the interior of facilities were typically conveyed utilizing large-diameter slick line and the equivalent diameter flexible high-pressure concrete conveyance hose. Other facilities' requirements dictated use of much smaller diameter flexible grout conveyance hose. Placement required direct operator location within facility structures in most cases, whereas due to radiological dose concerns, placement has also been completed remotely with significant standoff distances. Grout performance during placement and subsequent to placement often required unique design. For example, grout placed in fuel basin structures to serve as interim stabilization materials required sufficient bearing capacity, i.e., unconfined compressive strength, to sustain heavy equipment yet low enough breakout force to permit efficient removal by track hoe bucket or equivalent construction equipment. Further, flow of slurries through small orifice geometries at moderate head pressures was another typical design requirement. Phase separation of less than 1 percent was a typical design requirement for slurries. On the order of 30,000 cubic meters of cementitious grout have recently been placed in the above noted U.S. Department of Energy Hanford Site facilities or structures. Each has presented a unique challenge in mix design, equipment, grout injection or placement, and ultimate facility or structure performance. Unconfined compressive and shear strength, flow, density, mass attenuation coefficient, phase separation, air content, wash-out, and other parameters, unique to each facility or structure, dictate the grout mix design for each. Each mix design was tested under laboratory and scaled field conditions as a precursor to field deployment. Further, after injection or placement of each grout formulation, the material was field inspected either by standard laboratory testing protocols, direct physical evaluation, or both. (authors)
Evaluation of genotoxicity testing of FDA approved large molecule therapeutics.
Sawant, Satin G; Fielden, Mark R; Black, Kurt A
2014-10-01
Large molecule therapeutics (MW > 1000 daltons) are not expected to enter the cell and thus have reduced potential to interact directly with DNA or related physiological processes. Genotoxicity studies are therefore not relevant and typically not required for large molecule therapeutic candidates. Regulatory guidance supports this approach; however there are examples of marketed large molecule therapeutics where sponsors have conducted genotoxicity studies. A retrospective analysis was performed on genotoxicity studies of United States FDA approved large molecule therapeutics since 1998 identified through the Drugs@FDA website. This information was used to provide a data-driven rationale for genotoxicity evaluations of large molecule therapeutics. Fifty-three of the 99 therapeutics identified were tested for genotoxic potential. None of the therapeutics tested showed a positive outcome in any study except the peptide glucagon (GlucaGen®), which showed equivocal in vitro results, as stated in the product labeling. Scientific rationale and data from this review indicate that testing of a majority of large molecule modalities does not add value to risk assessment, and they support current regulatory guidance. Similarly, the data do not support testing of peptides containing only natural amino acids. Peptides containing non-natural amino acids and small molecules in conjugated products may need to be tested. Copyright © 2014 Elsevier Inc. All rights reserved.
Cine-servo lens technology for 4K broadcast and cinematography
NASA Astrophysics Data System (ADS)
Nurishi, Ryuji; Wakazono, Tsuyoshi; Usui, Fumiaki
2015-09-01
Central to the rapid evolution of 4K image capture technology in the past few years, the deployment of large-format cameras with Super35mm single sensors is increasing in TV production for diverse shows such as dramas, documentaries, wildlife, and sports. While large-format image capture has been the standard in the cinema world for quite some time, recent experience within the broadcast industry has revealed a variety of differences in requirements for large-format lenses compared to those of the cinema industry. A typical requirement for a broadcast lens is a considerably higher zoom ratio in order to avoid changing lenses in the middle of a live event, which is mostly not the case for traditional cinema productions. Another example is the need for compact size, light weight, and servo operability for a single camera operator shooting in a shoulder-mount ENG style. On the other hand, there are new requirements that are common to both worlds, such as smooth and seamless change in angle of view throughout the long zoom range, which potentially offers new image expression that never existed in the past. This paper will discuss the requirements from the two industries of cinema and broadcast, while at the same time introducing the new technologies and new optical design concepts applied to our latest "CINE-SERVO" lens series, which presently consists of two models, CN7x17KAS-S and CN20x50IAS-H. It will further explain how Canon has realized 4K optical performance and fast servo control while simultaneously achieving compact size, light weight and high zoom ratio, by referring to patent-pending technologies such as the optical power layout, lens construction, and glass material combinations.
Medical physics aspects of cancer care in the Asia Pacific region
Kron, T; Cheung, KY; Dai, J; Ravindran, P; Soejoko, D; Inamura, K; Song, JY; Bold, L; Srivastava, R; Rodriguez, L; Wong, TJ; Kumara, A; Lee, CC; Krisanachinda, A; Nguyen, XC; Ng, KH
2008-01-01
Medical physics plays an essential role in modern medicine. This is particularly evident in cancer care where medical physicists are involved in radiotherapy treatment planning and quality assurance as well as in imaging and radiation protection. Due to the large variety of tasks and interests, medical physics is often subdivided into specialties such as radiology, nuclear medicine and radiation oncology medical physics. However, even within their specialty, the role of radiation oncology medical physicists (ROMPs) is diverse and varies between different societies. Therefore, a questionnaire was sent to leading medical physicists in most countries/areas in the Asia/Pacific region to determine the education, role and status of medical physicists. Answers were received from 17 countries/areas representing nearly 2800 radiation oncology medical physicists. There was general agreement that medical physicists should have both academic (typically at MSc level) and clinical (typically at least 2 years) training. ROMPs spent most of their time working in radiotherapy treatment planning (average 17 hours per week); however radiation protection and engineering tasks were also common. Typically, only physicists in large centres are involved in research and teaching. Most respondents thought that the workload of physicists was high, with more than 500 patients per year per physicist, less than one ROMP per two oncologists being the norm, and on average, one megavoltage treatment unit per medical physicist. There was also a clear indication of increased complexity of technology in the region with many countries/areas reporting to have installed helical tomotherapy, IMRT (Intensity Modulated Radiation Therapy), IGRT (Image Guided Radiation Therapy), Gamma-knife and Cyber-knife units. This and the continued workload from brachytherapy will require growing expertise and numbers in the medical physics workforce. Addressing these needs will be an important challenge for the future. PMID:21611001
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
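A hedged toy illustration of the Sourcerer idea, not the Denovo/Shift implementation: power-iterating a small synthetic fission matrix shows that a source guess close to the converged distribution (standing in for the deterministic estimate) needs fewer source-convergence iterations than a uniform cold start.

```python
# Hedged sketch of the idea behind the Sourcerer method: a better initial
# fission-source guess reduces the number of (inactive) source-convergence
# iterations. A small synthetic fission matrix stands in for the transport
# problem; it is not the Denovo/Shift implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 200
idx = np.arange(n)
H = rng.random((n, n)) * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 10.0)

def iterations_to_converge(H, source, tol=1e-6, max_iter=10_000):
    """Power-iterate the fission source until successive iterates agree to tol."""
    s = source / source.sum()
    for it in range(1, max_iter + 1):
        s_new = H @ s
        s_new /= s_new.sum()
        if np.linalg.norm(s_new - s, 1) < tol:
            return it
        s = s_new
    return max_iter

# Reference dominant eigenvector (the converged fission source).
vals, vecs = np.linalg.eig(H)
s_true = np.abs(np.real(vecs[:, np.argmax(np.abs(vals))]))

uniform_guess = np.ones(n)                                          # cold start
deterministic_guess = s_true * (1 + 0.05 * rng.standard_normal(n))  # "Sourcerer-like"

print("uniform start:      ", iterations_to_converge(H, uniform_guess), "iterations")
print("deterministic start:", iterations_to_converge(H, deterministic_guess), "iterations")
```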
Control of large thermal distortions in a cryogenic wind tunnel
NASA Technical Reports Server (NTRS)
Gustafson, J. C.
1983-01-01
The National Transonic Facility (NTF) is a research wind tunnel capable of operation at temperatures down to 89K (160 R) and pressures up to 900,000 Pa (9 atmospheres) to achieve Reynolds numbers approaching 120,000,000. Wide temperature excursions combined with the precise alignment requirements of the tunnel aerodynamic surfaces imposed constraints on the mechanisms supporting the internal structures of the tunnel. The material selections suitable for this application were also limited. A general design philosophy of utilizing a single fixed point for each linear degree of freedom and guiding the expansion as required was adopted. These support systems allow thermal expansion to take place in a manner that minimizes the development of thermally induced stresses while maintaining structural alignment and resisting high aerodynamic loads. Typical of the support mechanisms are the preload brackets used in the fan shroud system and the Watts linkage used to support the upstream nacelle. The design of these mechanisms along with the basic design requirements and the constraints imposed by the tunnel system are discussed.
Wide field imaging problems in radio astronomy
NASA Astrophysics Data System (ADS)
Cornwell, T. J.; Golap, K.; Bhatnagar, S.
2005-03-01
The new generation of synthesis radio telescopes now being proposed, designed, and constructed faces substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
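As a hedged sketch of the aberration W projection corrects, the snippet below builds the w-dependent phase screen G(l, m; w) = exp(2*pi*i*w*(sqrt(1 - l^2 - m^2) - 1)) and its Fourier transform, which W projection uses as a per-w convolution kernel during gridding. The image size, field of view, and w value are illustrative only.

```python
# Hedged sketch of the w-term that W projection corrects: the phase screen
# G(l, m; w) = exp(2*pi*i * w * (sqrt(1 - l^2 - m^2) - 1)), whose Fourier
# transform serves as a w-dependent convolution (gridding) kernel. The image
# size, field of view, and w value below are illustrative only.
import numpy as np

n_pix, fov_rad, w = 256, 0.05, 1000.0          # pixels, field of view, w in wavelengths
l = np.linspace(-fov_rad / 2, fov_rad / 2, n_pix)
L, M = np.meshgrid(l, l)
n_term = np.sqrt(np.maximum(1.0 - L**2 - M**2, 0.0))

phase_screen = np.exp(2j * np.pi * w * (n_term - 1.0))   # w-term in the image plane
gridding_kernel = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phase_screen)))
# In W projection each visibility is convolved onto the uv grid with the kernel
# matching its w, instead of faceting or a full 3-D transform; the kernel support
# grows with |w| and with the field of view.
```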
Microgravity fluid management requirements of advanced solar dynamic power systems
NASA Technical Reports Server (NTRS)
Migra, Robert P.
1987-01-01
The advanced solar dynamic system (ASDS) program is aimed at developing the technology for highly efficient, lightweight space power systems. The approach is to evaluate Stirling, Brayton and liquid metal Rankine power conversion systems (PCS) over the temperature range of 1025 to 1400K, identify the critical technologies and develop these technologies. Microgravity fluid management technology is required in several areas of this program, namely, thermal energy storage (TES), heat pipe applications and liquid metal, two phase flow Rankine systems. Utilization of the heat of fusion of phase change materials offers potential for smaller, lighter TES systems. The candidate TES materials exhibit large volume change with the phase change. The heat pipe is an energy dense heat transfer device. A high temperature application may transfer heat from the solar receiver to the PCS working fluid and/or TES. A low temperature application may transfer waste heat from the PCS to the radiator. The liquid metal Rankine PCS requires management of the boiling/condensing process typical of two phase flow systems.
Evaluation of cable tension sensors of FAST reflector from the perspective of EMI
NASA Astrophysics Data System (ADS)
Zhu, Ming; Wang, Qiming; Egan, Dennis; Wu, Mingchang; Sun, Xiao
2016-06-01
The active reflector of FAST (five-hundred-meter aperture spherical radio telescope) is supported by a ring beam and a cable-net structure, in which nodes are actively controlled to form a series of real-time paraboloids. To ensure the security and stability of the supporting structure, tension must be monitored for some typical cables. Considering the stringent requirements in accuracy and long-term stability, magnetic flux sensors, vibrating wire strain gauges, and fiber Bragg grating strain gauges were screened for the cable tension monitoring of the supporting cable-net. Receivers of radio telescopes place strict restrictions on electromagnetic interference (EMI) and radio frequency interference (RFI), so these three types of sensors are evaluated from the perspective of EMI/RFI. First, their operating principles are analyzed theoretically. Second, typical sensor signals are collected in the time domain and analyzed in the frequency domain to characterize their spectral content. Finally, typical sensors are tested in an anechoic chamber to determine their EMI levels. Theoretical analysis shows that the fiber Bragg grating strain gauge itself will not produce EMI/RFI. According to GJB151A, frequency domain analysis and test results show that for the vibrating wire strain gauge and the magnetic flux sensor, measurable EMI/RFI levels are typically below the background noise of the anechoic chamber. FAST ultimately chose these three sensors for monitoring its cable tension. The study also provides a reference for the selection of monitoring equipment for other radio telescopes and large structures.
Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits
NASA Astrophysics Data System (ADS)
Schmidgall, Emma R.; Chakravarthi, Srivatsa; Gould, Michael; Christen, Ian R.; Hestroffer, Karine; Hatami, Fariba; Fu, Kai-Mei C.
2018-02-01
Generating entangled graph states of qubits requires high entanglement rates, with efficient detection of multiple indistinguishable photons from separate qubits. Integrating defect-based qubits into photonic devices results in an enhanced photon collection efficiency, however, typically at the cost of a reduced defect emission energy homogeneity. Here, we demonstrate that the reduction in defect homogeneity in an integrated device can be partially offset by electric field tuning. Using photonic device-coupled implanted nitrogen vacancy (NV) centers in a GaP-on-diamond platform, we demonstrate large field-dependent tuning ranges and partial stabilization of defect emission energies. These results address some of the challenges of chip-scale entanglement generation.
GEH-4-42, 47; Hot pressed, I and E cooled fuel element irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neidner, R.
1959-11-02
In our continual effort to improve the present fuel elements which are irradiated in the numerous Hanford reactors, we have made what we believe to be a significant improvement in the hot pressing process for jacketing uranium fuel slugs. We are proposing a large scale evaluation testing program in the Hanford reactors but need the vital and basic information on the operating characteristics of this type of slug under known and controlled operating conditions. We have therefore prepared two typical fuel slugs and will want them irradiated to about 1000 MWD/T exposure (this will require about four to five total cycles).
Functional fixedness in a technologically sparse culture.
German, Tim P; Barrett, H Clark
2005-01-01
Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object's typical function has been primed. Subjects become "fixed" on the design function of the object, and problem solving suffers relative to control conditions in which the object's function is not demonstrated. In the current study, such functional fixedness was demonstrated in a sample of adolescents (mean age of 16 years) among the Shuar of Ecuadorian Amazonia, whose technologically sparse culture provides limited access to large numbers of artifacts with highly specialized functions. This result suggests that design function may universally be the core property of artifact concepts in human semantic memory.
Parallel plate radiofrequency ion thruster
NASA Technical Reports Server (NTRS)
Nakanishi, S.
1982-01-01
An 8-cm-diam. argon ion thruster is described. It is operated by applying 100 to 160 MHz rf power across a thin plasma volume in a strongly divergent static magnetic field. No cathode or electron emitter is required to sustain a continuous wave plasma discharge over a broad range of propellant gas flow. Preliminary results indicate that a large fraction of the incident power is being reflected by impedance mismatching in the coupling structure. Resonance effects due to plasma thickness, magnetic field strength, and distribution are presented. Typical discharge losses obtained to date are 500 to 600 W per beam ampere at extracted beam currents up to 60 mA.
NASA Astrophysics Data System (ADS)
Stuart, M. R.; Pinsky, M. L.
2016-02-01
The ability to use DNA to identify individuals and their offspring has begun to revolutionize marine ecology. However, genetic mark-recapture and parentage studies typically require large numbers of individuals, with correspondingly high genotyping costs. Here, we describe a rapid and relatively low-cost protocol for genotyping non-model organisms at thousands of Single Nucleotide Polymorphisms (SNPs) using massively parallel sequencing. We apply the approach to a population of yellowtail clownfish, Amphiprion clarkii, to detect genetic mark-recaptures and parent-offspring relationships. We test multiple bioinformatic approaches and describe how this method could be applied to a wide variety of marine organisms.
Health impact assessment of industrial development projects: a spatio-temporal visualization.
Winkler, Mirko S; Krieger, Gary R; Divall, Mark J; Singer, Burton H; Utzinger, Jürg
2012-05-01
Development and implementation of large-scale industrial projects in complex eco-epidemiological settings typically require combined environmental, social and health impact assessments. We present a generic, spatio-temporal health impact assessment (HIA) visualization, which can be readily adapted to specific projects and key stakeholders, including poorly literate communities that might be affected by consequences of a project. We illustrate how the occurrence of a variety of complex events can be utilized for stakeholder communication, awareness creation, interactive learning as well as formulating HIA research and implementation questions. Methodological features are highlighted in the context of an iron ore development in a rural part of Africa.
Optimizing liquid effluent monitoring at a large nuclear complex.
Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M
2003-12-01
Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, that sampling frequency could be reduced, and that several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in annual cost savings of US $223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
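As a hedged sketch of the kind of exceedance calculation described above, the snippet below estimates the probability that a future measurement exceeds a permit limit, assuming lognormally distributed baseline concentrations. The distributional choice, the baseline values, and the limit are illustrative assumptions, not the paper's data or statistical model.

```python
import numpy as np
from scipy import stats

def exceedance_probability(samples, limit):
    """Probability a future measurement exceeds `limit`, assuming the baseline
    measurements are lognormally distributed (illustrative assumption)."""
    logs = np.log(np.asarray(samples, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return 1.0 - stats.lognorm.cdf(limit, s=sigma, scale=np.exp(mu))

# Hypothetical baseline data for one analyte and a hypothetical permit limit
baseline = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
print(exceedance_probability(baseline, limit=50.0))
```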
Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms
NASA Astrophysics Data System (ADS)
Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.
2016-10-01
The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.
Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets
NASA Astrophysics Data System (ADS)
Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.
2017-12-01
Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine-learning model is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
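A minimal sketch of the kind of recurrent correction model described above, using PyTorch's GRU: a sequence of raw readings (for example stage and rainfall) is mapped to corrected values learned from manually edited records. Layer sizes, window length, features, and the training loop are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class HydroCorrector(nn.Module):
    """GRU that maps a window of raw sensor readings to corrected readings."""
    def __init__(self, n_features=2, hidden=64, layers=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # corrected stage at each time step

    def forward(self, x):                  # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)    # (batch, time)

# Training-loop sketch: raw windows -> manually corrected targets (placeholders)
model = HydroCorrector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

raw = torch.randn(32, 288, 2)              # hypothetical: 32 windows of 288 5-min steps
corrected = torch.randn(32, 288)           # hypothetical manually corrected targets
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(raw), corrected)
    loss.backward()
    opt.step()
```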
Quantifying induced effects of subsurface renewable energy storage
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas
2015-04-01
New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of storing large amounts of energy or substances. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. To achieve this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model application. The process implementations and numerical methods required for simulating the induced effects of subsurface storage are then detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry of Education and Research (BMBF).
Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Whorton, M. S.
2007-01-01
Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance, but they require an external power source that may add significant parasitic mass to the solar sail, which must remain lightweight for optimal performance. Active control techniques also typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative approach is passive vibration control, which does not require an external power supply and cannot destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, semi-active control has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.
Clinical test responses to different orthoptic exercise regimes in typical young adults.
Horwood, Anna; Toor, Sonia
2014-03-01
The relative efficiency of different eye exercise regimes is unclear, and in particular the influences of practice, placebo and the amount of effort required are rarely considered. This study measured conventional clinical measures following different regimes in typical young adults. A total of 156 asymptomatic young adults were directed to carry out eye exercises three times daily for 2 weeks. Exercises were directed at improving blur responses (accommodation), disparity responses (convergence), both in a naturalistic relationship, convergence in excess of accommodation, accommodation in excess of convergence, and a placebo regime. They were compared to two control groups, neither of which was given exercises; the second control group was asked to make maximum effort during the second testing session. Instruction set and participant effort were more effective than many exercises. Convergence exercises independent of accommodation were the most effective treatment, followed by accommodation exercises, and both regimes resulted in changes in both vergence and accommodation test responses. Exercises targeting convergence and accommodation working together were less effective than those where they were separated. Accommodation measures were prone to large instruction/effort effects and monocular accommodation facility was subject to large practice effects. Separating convergence and accommodation exercises seemed more effective than exercising both systems concurrently and suggests that stimulation of accommodation and convergence may act in an additive fashion to aid responses. Instruction/effort effects are large and should be carefully controlled if claims for the efficacy of any exercise regime are to be made. © 2014 The Authors Ophthalmic & Physiological Optics published by John Wiley & Sons Ltd on behalf of The College of Optometrists.
Resonator reset in circuit QED by optimal control for large open quantum systems
NASA Astrophysics Data System (ADS)
Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Ferris, Andrew J.; Blais, Alexandre
2017-10-01
We study an implementation of the open GRAPE (gradient ascent pulse engineering) algorithm well suited for large open quantum systems. While typical implementations of optimal control algorithms for open quantum systems rely on explicit matrix exponential calculations, our implementation avoids these operations, leading to a polynomial speedup of the open GRAPE algorithm in cases of interest. This speedup, as well as the reduced memory requirements of our implementation, are illustrated by comparison to a standard implementation of open GRAPE. As a practical example, we apply this open-system optimization method to active reset of a readout resonator in circuit QED. In this problem, the shape of a microwave pulse is optimized so as to empty the cavity of measurement photons as fast as possible. Using our open GRAPE implementation, we obtain pulse shapes leading to a reset time over 4 times faster than passive reset.
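To illustrate the expm-free propagation idea mentioned above (only the propagation step, not the gradient computation of open GRAPE), the toy sketch below compares one time step computed with a full matrix exponential of a Liouvillian-like superoperator against an RK4 step that uses only matrix-vector products. The Hamiltonian, dimension, and step size are placeholders, and only the coherent part of the dynamics is included.

```python
import numpy as np
from scipy.linalg import expm

d = 30                                          # Hilbert-space dimension (placeholder)
rng = np.random.default_rng(1)
H = rng.standard_normal((d, d))
H = H + H.T                                     # placeholder Hermitian Hamiltonian
# Coherent part of the Liouvillian for row-major (C-order) vectorization of rho
L = np.kron(-1j * H, np.eye(d)) + np.kron(np.eye(d), 1j * H.T)

rho = np.zeros((d, d), dtype=complex)
rho[0, 0] = 1.0
v = rho.reshape(-1)
dt = 1e-3

# (a) "standard" propagation: a d^2 x d^2 matrix exponential per time step
v_expm = expm(L * dt) @ v

# (b) expm-free propagation: RK4 using only matrix-vector products with L
def rk4_step(L, v, dt):
    k1 = L @ v
    k2 = L @ (v + 0.5 * dt * k1)
    k3 = L @ (v + 0.5 * dt * k2)
    k4 = L @ (v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

v_rk4 = rk4_step(L, v, dt)
print(np.max(np.abs(v_expm - v_rk4)))           # agreement at small dt
```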
Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids
Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.
2016-01-01
Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through the molecule-specific spectral fingerprints. PMID:27364357
Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; ...
2015-10-22
Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) sub-system consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM sub-system is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and in good agreement with experiment.
Electric Propulsion Laboratory Vacuum Chamber
1964-06-21
Engineer Paul Reader and his colleagues take environmental measurements during testing of a 20-inch diameter ion engine in a vacuum tank at the Electric Propulsion Laboratory (EPL). Researchers at the Lewis Research Center were investigating the use of a permanent-magnet circuit to create the magnetic field required to power electron bombardment ion engines. Typical ion engines use a solenoid coil to create this magnetic field. It was thought that the substitution of a permanent magnet would create a comparable magnetic field with a lower weight. Testing of the magnet system in the EPL vacuum tanks revealed no significant operational problems. Reader found the weight of the two systems was similar, but that the thruster’s efficiency increased with the magnet. The EPL contained a series of large vacuum tanks that could be used to simulate conditions in space. Large vacuum pumps reduced the internal air pressure, and a refrigeration system created the cryogenic temperatures found in space.
Physical Retrieval of Surface Emissivity Spectrum from Hyperspectral Infrared Radiances
NASA Technical Reports Server (NTRS)
Li, Jun; Weisz, Elisabeth; Zhou, Daniel K.
2007-01-01
Retrieval of temperature, moisture profiles and surface skin temperature from hyperspectral infrared (IR) radiances requires spectral information about the surface emissivity. Using constant or inaccurate surface emissivities typically results in large retrieval errors, particularly over semi-arid or arid areas where the variation in emissivity spectrum is large both spectrally and spatially. In this study, a physically based algorithm has been developed to retrieve a hyperspectral IR emissivity spectrum simultaneously with the temperature and moisture profiles, as well as the surface skin temperature. To make the solution stable and efficient, the hyperspectral emissivity spectrum is represented by eigenvectors, derived from the laboratory measured hyperspectral emissivity database, in the retrieval process. Experience with AIRS (Atmospheric InfraRed Sounder) radiances shows that a simultaneous retrieval of the emissivity spectrum and the sounding improves the surface skin temperature as well as temperature and moisture profiles, particularly in the near surface layer.
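A minimal sketch of the eigenvector (EOF) representation of the emissivity spectrum described above: a laboratory database is reduced to a mean spectrum plus a few leading eigenvectors, and the retrieval then solves only for the expansion coefficients. The synthetic database, channel count, and number of eigenvectors below are illustrative placeholders, not the actual laboratory data or AIRS configuration.

```python
import numpy as np

# Hypothetical laboratory emissivity database: rows = samples, columns = channels
rng = np.random.default_rng(0)
n_samples, n_channels, n_eof = 200, 2378, 6       # channel count is illustrative
lab = 0.95 + 0.02 * rng.random((n_samples, n_channels))

mean_spec = lab.mean(axis=0)
_, _, vt = np.linalg.svd(lab - mean_spec, full_matrices=False)
eofs = vt[:n_eof]                                  # leading eigenvectors (EOFs)

def emissivity_from_coeffs(coeffs):
    """Reconstruct a full emissivity spectrum from a handful of coefficients."""
    return mean_spec + coeffs @ eofs

# In the retrieval, only the n_eof coefficients are solved for instead of
# thousands of per-channel emissivities, which stabilizes the inversion.
spectrum = emissivity_from_coeffs(np.zeros(n_eof))
```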
Application of a scattered-light radiometric power meter.
Caron, James N; DiComo, Gregory P; Ting, Antonio C; Fischer, Richard P
2011-04-01
The power measurement of high-power continuous-wave laser beams typically calls for the use of water-cooled thermopile power meters. Large thermopile meters have slow response times that can prove insufficient to conduct certain tests, such as determining the influence of atmospheric turbulence on transmitted beam power. To achieve faster response times, we calibrated a digital camera to measure the power level as the optical beam is projected onto a white surface. This scattered-light radiometric power meter saves the expense of purchasing a large area power meter and the required water cooling. In addition, the system can report the power distribution, changes in the position, and the spot size of the beam. This paper presents the theory of the scattered-light radiometric power meter and demonstrates its use during a field test at a 2.2 km optical range. © 2011 American Institute of Physics
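A minimal numpy sketch of the calibration idea behind the scattered-light power meter: total optical power is taken proportional to the dark-subtracted pixel sum, and the same image yields the beam centroid and second-moment spot size. The calibration-constant interface is a hypothetical stand-in for the thermopile cross-calibration described in the paper.

```python
import numpy as np

def beam_metrics(image, dark, cal_w_per_count):
    """Power, centroid, and second-moment spot size from a camera image of the
    scattered beam. `cal_w_per_count` is a hypothetical calibration constant
    obtained once against a reference (e.g. thermopile) reading."""
    img = np.clip(image.astype(float) - dark, 0.0, None)
    total = img.sum()
    power = cal_w_per_count * total
    y, x = np.indices(img.shape)
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    sx = np.sqrt(((x - cx) ** 2 * img).sum() / total)   # second-moment widths
    sy = np.sqrt(((y - cy) ** 2 * img).sum() / total)
    return power, (cx, cy), (sx, sy)
```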
Fundamental challenges to methane recovery from gas hydrates
Servio, P.; Eaton, M.W.; Mahajan, D.; Winters, W.J.
2005-01-01
The fundamental challenges that must be addressed to recover methane from dispersed hydrate sources (their location, magnitude, and feasibility of recovery) are presented. To induce dissociation of gas hydrate prior to methane recovery, two potential methods are typically considered. Because thermal stimulation requires a large energy input, it is less economically feasible than depressurization. The new data will allow the study of the effect of pressure, temperature, diffusion, porosity, tortuosity, composition of gas and water, and porous media on gas-hydrate production. These data also will allow one to improve existing models related to the stability and dissociation of sea floor hydrates. The reproducible kinetic data from the planned runs together with sediment properties will aid in developing a process to economically recover methane from a potentially untapped hydrate source. The availability of plentiful methane will allow economical and large-scale production of methane-derived clean fuels to help avert future energy crises.
Graphene growth with ‘no’ feedstock
NASA Astrophysics Data System (ADS)
Qing, Fangzhu; Jia, Ruitao; Li, Bao-Wen; Liu, Chunlin; Li, Congzhou; Peng, Bo; Deng, Longjiang; Zhang, Wanli; Li, Yanrong; Ruoff, Rodney S.; Li, Xuesong
2017-06-01
Synthesis of graphene by chemical vapor deposition (CVD) from hydrocarbons on Cu foil substrates can yield high quality and large area graphene films. In a typical CVD process, a hydrocarbon in the gas phase is introduced for graphene growth and hydrogen is usually required to achieve high quality graphene. We have found that in a low pressure CVD system equipped with an oil mechanical vacuum pump located downstream, graphene can be grown without deliberate introduction of a carbon feedstock but with only trace amounts of C present in the system, the origin of which we attribute to the vapor of the pump oil. This finding may help to rationalize the differences in graphene growth reported by different research groups. It should also help to gain an in-depth understanding of graphene growth mechanisms with the aim to improve the reproducibility and structure control in graphene synthesis, e.g. the formation of large area single crystal graphene and uniform bilayer graphene.
On the analytical modeling of the nonlinear vibrations of pretensioned space structures
NASA Technical Reports Server (NTRS)
Housner, J. M.; Belvin, W. K.
1983-01-01
Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited due to gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.
Facile fabrication and electrical investigations of nanostructured p-Si/n-TiO2 hetero-junction diode
NASA Astrophysics Data System (ADS)
Kumar, Arvind; Mondal, Sandip; Rao, K. S. R. Koteswara
2018-05-01
In this work, we have fabricated a nanostructured p-Si/n-TiO2 hetero-junction diode using a facile spin-coating method. The XRD analysis indicates the presence of a well crystalline anatase TiO2 film on Si with small grain size (~16 nm). We have drawn the band alignment using the Anderson model to understand the electrical transport across the junction. Analysis of the current-voltage (J-V) characteristics reveals a good rectification ratio (10^3 at ±3 V) and a relatively high ideality factor (4.7) for our device. The interface states are responsible for the large ideality factor, as Si/TiO2 forms a dissimilar interface and possesses a large number of dangling bonds. The study shows the promise of the Si/TiO2 diode as an alternative to the traditional p-n homo-junction diode, which typically requires a higher fabrication budget.
Delea, Thomas E; Hagiwara, May; Thomas, Simu K; Baladi, Jean-Francois; Phatak, Pradyumna D; Coates, Thomas D
2008-04-01
Deferoxamine mesylate (DFO) reduces morbidity and mortality associated with transfusional iron overload. However, data on the utilization and costs of care among U.S. patients receiving DFO in typical clinical practice are limited. This was a retrospective study using a large U.S. health insurance claims database spanning 1/97-12/04 and representing 40 million members in >70 health plans. Study subjects (n = 145 total, 106 sickle cell disease [SCD], 39 thalassemia) included members with a diagnosis of thalassemia or SCD, one or more transfusions (whole blood or red blood cells), and one or more claims for DFO. Patients received a mean of 12 transfusion episodes per year. Estimated mean DFO use was 307 g/year. Central venous access devices were required by 20% of patients. Cardiac disease was observed in 16% of patients. Mean total medical costs were $59,233 per year, including $10,899 for DFO and $8,722 for administration of chelation therapy. In multivariate analyses, potential complications of iron overload were associated with significantly higher medical care costs. In typical clinical practice, use of DFO in patients with thalassemia and SCD receiving transfusions is low. Administration costs represent a large proportion of the cost of chelation therapy. Potential complications of iron overload are associated with increased costs. (c) 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Intrator, T.; Zhang, S. Y.; Degnan, J. H.; Furno, I.; Grabowski, C.; Hsu, S. C.; Ruden, E. L.; Sanchez, P. G.; Taccetti, J. M.; Tuszewski, M.; Waganaar, W. J.; Wurden, G. A.
2004-05-01
Magnetized target fusion (MTF) is a potentially low cost path to fusion, intermediate in plasma regime between magnetic and inertial fusion energy. It requires compression of a magnetized target plasma and consequent heating to fusion relevant conditions inside a converging flux conserver. To demonstrate the physics basis for MTF, a field reversed configuration (FRC) target plasma has been chosen that will ultimately be compressed within an imploding metal liner. The required FRC will need a large density, and this regime is being explored by the FRX-L (FRC-Liner) experiment. All theta-pinch-formed FRCs have some shock heating during formation, but FRX-L depends further on large ohmic heating from magnetic flux annihilation to heat the high density (2-5×10^22 m^-3) plasma to a temperature of Te+Ti≈500 eV. At the field null, anomalous resistivity is typically invoked to characterize the resistive-like flux dissipation process. The first resistivity estimate for a high density collisional FRC is shown here. The flux dissipation process is both a key issue for MTF and an important underlying physics question.
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% of disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
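A simplified sketch of the core idea behind such a temporal index (not the paper's actual tree layout or I/O scheme): per-cell value ranges that vary little over time are coalesced into a single widened entry shared by all time steps, while volatile cells keep per-step entries; a query returns the cells whose range brackets the isovalue. The coalescing tolerance is an illustrative assumption.

```python
from collections import defaultdict

def build_temporal_index(cell_ranges, tol=0.05):
    """cell_ranges[t][c] = (min, max) of cell c at time step t.
    Cells whose range varies by less than `tol` over the whole span are stored
    once with a widened range; the rest are stored per time step."""
    n_steps = len(cell_ranges)
    n_cells = len(cell_ranges[0])
    shared, per_step = {}, defaultdict(dict)
    for c in range(n_cells):
        los = [cell_ranges[t][c][0] for t in range(n_steps)]
        his = [cell_ranges[t][c][1] for t in range(n_steps)]
        if max(his) - min(his) < tol and max(los) - min(los) < tol:
            shared[c] = (min(los), max(his))          # one coalesced entry
        else:
            for t in range(n_steps):
                per_step[t][c] = cell_ranges[t][c]    # exact entry per step
    return shared, per_step

def candidate_cells(shared, per_step, t, isovalue):
    """Cells that may contain the isosurface at time step t."""
    hits = [c for c, (lo, hi) in shared.items() if lo <= isovalue <= hi]
    hits += [c for c, (lo, hi) in per_step[t].items() if lo <= isovalue <= hi]
    return hits
```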
NASA Astrophysics Data System (ADS)
Burritt, Rosemary; Francois, Elizabeth; Windler, Gary; Chavez, David
2017-06-01
Diaminoazoxyfurazan (DAAF) has many of the safety characteristics of an insensitive high explosive (IHE): it is extremely insensitive to impact and friction and is comparable to triaminotrinitrobenzene (TATB) in this respect. Conversely, it demonstrates many performance characteristics of a conventional high explosive (CHE). DAAF has a small failure diameter of about 1.25 mm and can be sensitive to shock under the right conditions. Large-particle-size DAAF will not initiate in a typical exploding foil initiator (EFI) configuration, but smaller particle sizes will. Large-particle-size DAAF, of 40 μm, was crash precipitated and ball milled into six distinct samples and pressed into pellets with a density of 1.60 g/cc (91% TMD). To investigate the effect of particle size and surface area on the direct initiation of DAAF, multiple threshold tests were performed on each sample of DAAF in different EFI configurations, which varied in flyer thickness and/or bridge size. Comparative tests examined threshold voltage and were correlated to Photon Doppler Velocimetry (PDV) results. The samples with larger particle sizes and surface area required more energy to initiate, while the smaller particle sizes required less energy and could be initiated with smaller diameter flyers.
Tower Based Load Measurements for Individual Pitch Control and Tower Damping of Wind Turbines
NASA Astrophysics Data System (ADS)
Kumar, A. A.; Hugues-Salas, O.; Savini, B.; Keogh, W.
2016-09-01
The cost of IPC has hindered adoption outside of Europe despite significant loading advantages for large wind turbines. In this work we present a method for applying individual pitch control (including higher harmonics) using tower-top strain gauge feedback instead of blade-root strain gauge feedback. Tower-top strain gauges offer hardware savings of approximately 50%, in addition to easier access for maintenance and installation and a less specialised skill-set than that required for applying strain gauges to composite blade roots. A further advantage is the possibility of using the same tower-top sensor array for tower damping control. This method is made possible by including a second-order IPC loop in addition to the tower damping loop to reduce the typically dominant 3P content in tower-top load measurements. High-fidelity Bladed simulations show that the resulting turbine spectral characteristics from tower-top feedback IPC, and from the combination of tower-top IPC and damping loops, largely match those of blade-root feedback IPC and nacelle-velocity feedback damping. Lifetime-weighted fatigue analysis shows that the method allows load reductions within 2.5% of traditional methods.
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
Graphite Nanoreinforcements for Aerospace Nanocomposites
NASA Technical Reports Server (NTRS)
Drzal, Lawrence T.
2005-01-01
New advances in the reinforcement of polymer matrix composite materials are critical for advancement of the aerospace industry. Reinforcements are required to have good mechanical and thermal properties, large aspect ratio, excellent adhesion to the matrix, and cost effectiveness. To fulfill these requirements, nanocomposites in which the matrix is filled with nanoscopic reinforcing phases having dimensions typically in the range of 1 nm to 100 nm show considerably higher strength and modulus with far lower reinforcement content than their conventional counterparts. Graphite is a layered material whose layers have dimensions in the nanometer range and are held together by weak van der Waals forces. Once these layers are exfoliated and dispersed in a polymer matrix as nanoplatelets, they have large aspect ratios. Graphite has an elastic modulus that is equal to the stiffest carbon fiber and 10-15 times that of other inorganic reinforcements, and it is also electrically and thermally conductive. If the appropriate surface treatment can be found for graphite, its exfoliation and dispersion in a polymer matrix will result in a composite with excellent mechanical properties, superior thermal stability, and very good electrical and thermal properties at very low reinforcement loadings.
Control of large space structures
NASA Technical Reports Server (NTRS)
Gran, R.; Rossi, M.; Moyer, H. G.; Austin, F.
1979-01-01
The control of large space structures was studied to determine what, if any, limitations are imposed on the size of spacecraft which may be controlled using current control system design technology. Using a typical structure in the 35 to 70 meter size category, a control system was designed using currently available actuators. The amount of control power required to maintain the vehicle in a stabilized gravity gradient pointing orientation that also damped various structural motions was determined. The moment of inertia and mass properties of this structure were varied to verify that stability and performance were maintained. The study concludes that the structure's size must change by at least a factor of two before any stability problems arise. The stability margin that is lost is due to the scaling of the gravity gradient torques (the rigid body control) and as such can easily be corrected by changing the control gains associated with the rigid body control. A secondary conclusion from the study is that the control design that accommodates the structural motions (to damp them) is a little more sensitive than the design that works on attitude control of the rigid body only.
BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulation is a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory scale tests come at a significant cost. However, the computational time required to simulate an industrial scale problem consisting of tens of millions of particles can take months to complete on large CPU clusters, making the Discrete Element Method (DEM) unfeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU based DEM code, BlazeDEM3D-GPU, that can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
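For orientation, here is a generic, textbook-style DEM step for spheres with a linear spring-dashpot normal contact, written in plain Python with an O(N^2) pair search. It is not the BlazeDEM3D-GPU kernel (which uses GPU-resident spatial grids and polyhedral contact); the stiffness, damping, and gravity values are placeholders.

```python
import numpy as np

def dem_step(pos, vel, radius, mass, dt, k_n=1e4, c_n=5.0, g=(0.0, 0.0, -9.81)):
    """One explicit DEM step for spheres with linear spring-dashpot normal contacts."""
    n = len(pos)
    force = np.tile(np.asarray(g), (n, 1)) * mass[:, None]
    for i in range(n):                       # O(N^2) pair check; real codes use grids/GPUs
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:
                normal = d / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = (k_n * overlap - c_n * rel_vn) * normal   # normal force on particle j
                force[i] -= fn
                force[j] += fn
    vel = vel + force / mass[:, None] * dt   # symplectic Euler integration
    pos = pos + vel * dt
    return pos, vel
```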
NASA Technical Reports Server (NTRS)
Xue, Min; Rios, Joseph
2017-01-01
Small Unmanned Aerial Vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large scale small UAV operations are expected to happen in low altitude airspace in the near future. Many static and dynamic constraints exist in low altitude airspace because of manned aircraft or helicopter activities, varying wind conditions, restricted airspace, terrain and man-made buildings, and conflict-avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and mandate a simulation platform different from existing simulations built for the manned air traffic system and large unmanned fixed-wing aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system that can accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of several key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
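A minimal sketch of the FM-index backward search that underlies this style of classification, built naively from a full suffix sort (fine only for tiny examples, unlike Centrifuge's compressed index over the full nucleotide database). The toy sequence and pattern are illustrative.

```python
def build_fm_index(text):
    """Naive FM-index: BWT via a full suffix sort (small examples only)."""
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)
    # C[c] = number of characters in the text strictly smaller than c
    chars = sorted(set(text))
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += text.count(c)
    return bwt, C

def occ(bwt, c, i):
    """Occurrences of character c in bwt[:i]."""
    return bwt[:i].count(c)

def backward_search(bwt, C, pattern):
    """Number of occurrences of `pattern`, via FM-index backward search."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = build_fm_index("ACGTACGTACGA")
print(backward_search(bwt, C, "ACG"))   # expected output: 3
```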
Low Temperature Photoluminescence Characterization of Orbitally Grown CdZnTe
NASA Technical Reports Server (NTRS)
Ritter, Timothy M.; Larson, D. J.
1998-01-01
The II-VI ternary alloy CdZnTe is a technologically important material because of its use as a lattice matched substrate for HgCdTe based devices. The increasingly stringent performance requirements that must be met by such large area infrared detectors also necessitate a higher quality substrate. Such substrate material is typically grown using the Bridgman technique. Due to the nature of bulk semiconductor growth, gravitationally dependent phenomena can adversely affect crystalline quality. The most direct way to alleviate this problem is by crystal growth in a reduced gravity environment. Since it requires hours, even days, to grow a high quality crystal, an orbiting space shuttle or space station provides a superb platform on which to conduct such research. For well over ten years NASA has been studying the effects of microgravity on semiconductor crystal growth. This paper reports the results of photoluminescence characterization performed on an orbitally grown CdZnTe bulk crystal.
Manbeck, Gerald F.; Fujita, Etsuko
2015-03-30
This review summarizes research on the electrochemical and photochemical reduction of CO₂ using a variety of iron and cobalt porphyrins, phthalocyanines, and related complexes. Metalloporphyrins and metallophthalocyanines are visible light absorbers with extremely large extinction coefficients. However, yields of photochemically-generated active catalysts for CO₂ reduction are typically low owing to the requirement of a second photoinduced electron. This requirement is not relevant in the case of electrochemical CO₂ reduction. Recent progress on efficient and stable electrochemical systems includes the use of FeTPP catalysts that have prepositioned phenyl OH groups in their second coordination spheres. This has led to remarkable progress in carrying out coupled proton-electron transfer reactions for CO₂ reduction. Such ground-breaking research must be continued in order to produce renewable fuels in an economically feasible manner.
Proposed techniques for launching instrumented balloons into tornadoes
NASA Technical Reports Server (NTRS)
Grant, F. C.
1971-01-01
A method is proposed to introduce instrumented balloons into tornadoes by means of the radial pressure gradient, which supplies a buoyancy force driving to the center. Presented are analytical expressions, verified by computer calculations, which show the possibility of introducing instrumented balloons into tornadoes at or below the cloud base. The times required to reach the center are small enough that a large fraction of tornadoes are suitable for the technique. An experimental procedure is outlined in which a research airplane puts an instrumented, self-inflating balloon on the track ahead of the tornado. The uninflated balloon waits until the tornado closes to, typically, 750 meters; then it quickly inflates and spirals up and into the core, taking roughly 3 minutes. Since the drive to the center is automatically produced by the radial pressure gradient, a proper launch radius is the only guidance requirement.
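As a back-of-envelope illustration of the inward drive described above (not the paper's analysis), in cyclostrophic balance the radial pressure gradient is roughly dp/dr = ρv²/r, so a neutrally buoyant balloon not yet rotating with the core feels an inward pressure-gradient acceleration of about v²/r. The wind speed and radius used below are assumed values chosen only to show the order of magnitude.

```python
# Back-of-envelope estimate of the inward pressure-gradient acceleration on a
# neutrally buoyant balloon in a tornado. Numbers are assumed, not from the paper.
rho_air = 1.2      # kg/m^3, approximate air density
v = 50.0           # m/s, assumed tangential wind speed at the balloon's radius
r = 300.0          # m, assumed radius from the tornado axis

dp_dr = rho_air * v**2 / r    # cyclostrophic balance, Pa/m
a_inward = v**2 / r           # inward acceleration per unit mass for a neutrally buoyant balloon, m/s^2

print(f"radial pressure gradient ~ {dp_dr:.1f} Pa/m")
print(f"inward acceleration      ~ {a_inward:.1f} m/s^2 (~{a_inward / 9.81:.2f} g)")
```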
Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis
NASA Technical Reports Server (NTRS)
Bradley, James R.
2012-01-01
This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required to develop that model is then determined; we conclude that six hours of training are required to teach the skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students having completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
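The paper builds its model in Excel; as a language-neutral sketch of the same fundamental machinery (a future-event list, a simulation clock, and state updates), here is a minimal single-server queue simulation in Python. All parameters are arbitrary and chosen only for illustration.

```python
# Minimal discrete-event simulation of a single-server queue.
# Illustrative sketch only; the paper's model is built in Excel.
import heapq
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 1.0, 1.25, 10_000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]   # future-event list: (time, type)
clock, queue_len, busy = 0.0, 0, False
served, queue_time_area, last_t = 0, 0.0, 0.0

while events:
    clock, kind = heapq.heappop(events)
    if clock > HORIZON:
        break
    queue_time_area += queue_len * (clock - last_t)         # time-integrate the queue length
    last_t = clock
    if kind == "arrival":
        heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:                                                   # departure
        served += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print(f"customers served: {served}, average queue length: {queue_time_area / last_t:.2f}")
```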
Arduino: a low-cost multipurpose lab equipment.
D'Ausilio, Alessandro
2012-06-01
Typical experiments in psychological and neurophysiological settings often require the accurate control of multiple input and output signals. These signals are often generated or recorded via computer software and/or external dedicated hardware. Dedicated hardware is usually very expensive and requires additional software to control its behavior. In the present article, I present some accuracy tests on a low-cost and open-source I/O board (Arduino family) that may be useful in many lab environments. One of the strengths of Arduinos is the possibility they afford to load the experimental script on the board's memory and let it run without interfacing with computers or external software, thus granting complete independence, portability, and accuracy. Furthermore, a large community has arisen around the Arduino idea and offers many hardware add-ons and hundreds of free scripts for different projects. Accuracy tests show that Arduino boards may be an inexpensive tool for many psychological and neurophysiological labs.
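As one example of the kind of accuracy check the article describes, a host-side script can time the round trip of a byte echoed back by the board. The port name, baud rate, and the assumption that the board is running a simple byte-echo sketch are all hypothetical; this is a sketch of the idea, not the author's test code.

```python
# Rough host-side round-trip latency check against an Arduino assumed to be
# running a byte-echo sketch. Hypothetical setup, not the article's procedure.
import statistics
import time

import serial  # pyserial

PORT = "/dev/ttyACM0"   # assumed port name; adjust for your system
with serial.Serial(PORT, 115200, timeout=1) as link:
    time.sleep(2.0)     # many boards reset when the port opens; allow time to boot
    samples = []
    for _ in range(200):
        t0 = time.perf_counter()
        link.write(b"x")
        link.read(1)    # wait for the echoed byte
        samples.append((time.perf_counter() - t0) * 1e3)

print(f"median round trip: {statistics.median(samples):.2f} ms, "
      f"jitter (stdev): {statistics.stdev(samples):.2f} ms")
```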
NASA Astrophysics Data System (ADS)
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
Draghici, Sorin; Tarca, Adi L; Yu, Longfei; Ethier, Stephen; Romero, Roberto
2008-03-01
The BioArray Software Environment (BASE) is a very popular MIAME-compliant, web-based microarray data repository. However, in BASE, as in most other microarray data repositories, experiment annotation and raw data uploading can be very time-consuming, especially for large microarray experiments. We developed KUTE (Karmanos Universal daTabase for microarray Experiments) as a plug-in for BASE 2.0 that addresses these issues. KUTE provides an automatic experiment annotation feature and a completely redesigned data workflow that dramatically reduce the human-computer interaction time. For instance, in BASE 2.0 a typical Affymetrix experiment involving 100 arrays required 4 h 30 min of user interaction time for experiment annotation, and 45 min for data upload/download. In contrast, for the same experiment, KUTE required only 28 min of user interaction time for experiment annotation, and 3.3 min for data upload/download. http://vortex.cs.wayne.edu/kute/index.html.
Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.
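As a schematic illustration of the approach (not the NASA implementation), the sketch below evolves a four-actuator subset that maximizes pitch moment while penalizing roll and yaw coupling. The per-location moment contributions and GA settings are made-up placeholders.

```python
# Toy genetic algorithm for choosing 4 actuator locations that maximize pitch
# moment while penalizing roll/yaw coupling. The moment table is random and
# purely illustrative; this is not the NASA implementation.
import random

random.seed(0)
N_LOCATIONS, SUBSET, POP, GENS = 40, 4, 60, 200
# per-location (pitch, roll, yaw) moment contributions (made-up numbers)
moments = [(random.uniform(0, 1), random.gauss(0, 0.3), random.gauss(0, 0.3))
           for _ in range(N_LOCATIONS)]

def fitness(individual):
    pitch = sum(moments[i][0] for i in individual)
    roll = abs(sum(moments[i][1] for i in individual))
    yaw = abs(sum(moments[i][2] for i in individual))
    return pitch - 2.0 * (roll + yaw)            # reward pitch, penalize coupling

def crossover(a, b):
    return frozenset(random.sample(list(a | b), SUBSET))

def mutate(individual):
    child = set(individual)
    child.discard(random.choice(list(child)))
    while len(child) < SUBSET:
        child.add(random.randrange(N_LOCATIONS))
    return frozenset(child)

population = [frozenset(random.sample(range(N_LOCATIONS), SUBSET)) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    population = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in parents]

best = max(population, key=fitness)
print("best actuator set:", sorted(best), "fitness:", round(fitness(best), 3))
```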
Electronics Shielding and Reliability Design Tools
NASA Technical Reports Server (NTRS)
Wilson, John W.; ONeill, P. M.; Zang, Thomas A., Jr.; Pandolf, John E.; Koontz, Steven L.; Boeder, P.; Reddell, B.; Pankop, C.
2006-01-01
It is well known that electronics placement in large-scale human-rated systems provides opportunity to optimize electronics shielding through materials choice and geometric arrangement. For example, several hundred single event upsets (SEUs) occur within the Shuttle avionic computers during a typical mission. An order of magnitude larger SEU rate would occur without careful placement in the Shuttle design. These results used basic physics models (linear energy transfer (LET), track structure, Auger recombination) combined with limited SEU cross section measurements allowing accurate evaluation of target fragment contributions to Shuttle avionics memory upsets. Electronics shielding design on human-rated systems provides opportunity to minimize radiation impact on critical and non-critical electronic systems. Implementation of shielding design tools requires adequate methods for evaluation of design layouts, guiding qualification testing, and an adequate follow-up on final design evaluation including results from a systems/device testing program tailored to meet design requirements.
Space station needs, attributes and architectural options study. Volume 3: Requirements
NASA Technical Reports Server (NTRS)
1983-01-01
A typical system specification format is presented and requirements are compiled. A Program Specification Tree is shown for a high-inclination space station and a low-inclination space station with their typical element breakdowns; the interfaces with other systems are represented along the top blocks. The specification format is directed at the Low Inclination space station.
Simplified jet-A kinetic mechanism for combustor application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman
1993-01-01
Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. For hydrocarbon oxidation, detailed mechanisms are available only for the simplest hydrocarbons, such as methane, ethane, acetylene, and propane. These detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic (CFD) models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. Simulating these conditions requires a very sophisticated computer model with large memory capacity and long run times. Therefore, gas turbine combustion modeling has frequently been simplified by using global reaction mechanisms, which can predict only the quantities of interest: heat release rates, flame temperature, and emissions. Jet fuels are wide-boiling-range hydrocarbons with ranges extending through those of gasoline and kerosene. These fuels are chemically complex, often containing more than 300 components. Jet fuel can typically be characterized as containing 70 vol pct paraffin compounds and 25 vol pct aromatic compounds. A five-step Jet-A fuel mechanism involving pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented here. This mechanism is verified by comparison with experimental Jet-A ignition delay times and with species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.
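The abstract does not list the five steps or their rate constants; as a generic illustration of how a single global (lumped) oxidation step is integrated in practice, the sketch below integrates one hypothetical Arrhenius fuel-consumption step with scipy. Every parameter value is a placeholder, not a Jet-A quantity from the paper.

```python
# Generic one-step global fuel oxidation, d[F]/dt = -A*exp(-Ea/RT)*[F]^a*[O2]^b,
# integrated at constant temperature. All parameters are placeholders, not the
# paper's Jet-A five-step mechanism.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, a, b = 1.0e9, 2.0e5, 0.25, 1.5   # pre-exponential, activation energy [J/mol], reaction orders (hypothetical)
R, T = 8.314, 1500.0                    # gas constant, assumed constant temperature [K]
nu_o2 = 12.5                            # assumed moles of O2 consumed per mole of lumped fuel

def rhs(t, y):
    fuel, o2 = np.maximum(y, 0.0)       # clip to avoid negative concentrations in the rate law
    rate = A * np.exp(-Ea / (R * T)) * fuel**a * o2**b   # mol/(m^3 s)
    return [-rate, -nu_o2 * rate]

sol = solve_ivp(rhs, (0.0, 5e-3), [1.0, 15.0], max_step=1e-5)
print(f"fuel remaining after 5 ms: {max(sol.y[0, -1], 0.0):.3e} mol/m^3")
```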
Non-iterative double-frame 2D/3D particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.
2017-09-01
In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging, and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. Maximizing the spatial resolution for instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring the user to specify any parameters, except for the displacement limits. This makes the algorithm very user friendly and also enables inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.
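As a schematic of the double-frame tracking idea (deliberately simpler than the authors' neighbourhood-motion algorithm), the sketch below pairs each particle in frame 1 with its nearest candidate in frame 2, subject only to a user-supplied displacement limit. The synthetic particle fields stand in for detected particle positions.

```python
# Schematic double-frame particle pairing under a displacement limit only.
# Plain nearest-neighbour sketch; not the authors' neighbourhood-motion algorithm.
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.uniform(0, 256, size=(200, 2))              # particle positions, frame 1 [px]
true_shift = np.array([4.0, 1.5])                        # synthetic uniform displacement
frame2 = frame1 + true_shift + rng.normal(0, 0.2, frame1.shape)

MAX_DISP = 8.0                                           # displacement limit [px]

def match(frame1, frame2, max_disp):
    """Return (index1, index2, displacement) for nearest-neighbour pairs within max_disp."""
    pairs = []
    for i, p in enumerate(frame1):
        d = np.linalg.norm(frame2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            pairs.append((i, j, frame2[j] - p))
    return pairs

pairs = match(frame1, frame2, MAX_DISP)
mean_disp = np.mean([dp for _, _, dp in pairs], axis=0)
print(f"{len(pairs)} particles tracked, mean displacement ~ {mean_disp.round(2)} px")
```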
Variability extraction and modeling for product variants.
Linsbauer, Lukas; Lopez-Herrejon, Roberto Erick; Egyed, Alexander
2017-01-01
Fast-changing hardware and software technologies in addition to larger and more specialized customer bases demand software tailored to meet very diverse requirements. Software development approaches that aim at capturing this diversity on a single consolidated platform often require large upfront investments, e.g., time or budget. Alternatively, companies resort to developing one variant of a software product at a time by reusing as much as possible from already-existing product variants. However, identifying and extracting the parts to reuse is an error-prone and inefficient task compounded by the typically large number of product variants. Hence, more disciplined and systematic approaches are needed to cope with the complexity of developing and maintaining sets of product variants. Such approaches require detailed information about the product variants, the features they provide and their relations. In this paper, we present an approach to extract such variability information from product variants. It identifies traces from features and feature interactions to their implementation artifacts, and computes their dependencies. This work can be useful in many scenarios ranging from ad hoc development approaches such as clone-and-own to systematic reuse approaches such as software product lines. We applied our variability extraction approach to six case studies and provide a detailed evaluation. The results show that the extracted variability information is consistent with the variability in our six case study systems given by their variability models and available product variants.
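As a much-simplified sketch of the trace-extraction idea (not the authors' tool), artifacts that appear in exactly the variants providing a feature can be attributed to that feature by set intersection and difference over the available product variants. The variants, features, and artifact names below are hypothetical.

```python
# Simplified feature-to-artifact trace extraction by set algebra over product variants.
# Illustrative sketch only; the paper's approach also handles feature interactions
# (e.g. "inbox.c" below, which belongs to sms AND email) and computes dependencies.

# Hypothetical variants: the features they provide and the artifacts they contain.
variants = {
    "P1": ({"base", "sms"},          {"core.c", "ui.c", "sms.c"}),
    "P2": ({"base", "email"},        {"core.c", "ui.c", "email.c"}),
    "P3": ({"base", "sms", "email"}, {"core.c", "ui.c", "sms.c", "email.c", "inbox.c"}),
}

features = set().union(*(feats for feats, _ in variants.values()))
traces = {}
for feature in features:
    with_f    = [arts for feats, arts in variants.values() if feature in feats]
    without_f = [arts for feats, arts in variants.values() if feature not in feats]
    common = set.intersection(*with_f)                      # present in every variant with the feature
    traces[feature] = (common - set().union(*without_f)) if without_f else common

for feature, artifacts in sorted(traces.items()):
    print(f"{feature:6s} -> {sorted(artifacts)}")
```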
Generating large misalignments in gapped and binary discs
NASA Astrophysics Data System (ADS)
Owen, James E.; Lai, Dong
2017-08-01
Many protostellar gapped and binary discs show misalignments between their inner and outer discs; in some cases, ˜70° misalignments have been observed. Here, we show that these misalignments can be generated through a secular resonance between the nodal precession of the inner disc and the precession of the gap-opening (stellar or massive planetary) companion. An evolving protostellar system may naturally cross this resonance during its lifetime due to disc dissipation and/or companion migration. If resonance crossing occurs on the right time-scale, of the order of a few million years, characteristic for young protostellar systems, the inner and outer discs can become highly misaligned, with misalignments ≳ 60° typical. When the primary star has a mass of order a solar mass, generating a significant misalignment typically requires the companion to have a mass of ˜0.01-0.1 M⊙ and an orbital separation of tens of astronomical units. The recently observed companion in the cavity of the gapped, highly misaligned system HD 142527 satisfies these requirements, indicating that a previous resonance crossing event misaligned the inner and outer discs. Our scenario for HD 142527's misaligned discs predicts that the companion's orbital plane is aligned with the outer disc's; this prediction should be testable with future observations as the companion's orbit is mapped out. Misalignments observed in several other gapped disc systems could be generated by the same secular resonance mechanism.
Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application
NASA Astrophysics Data System (ADS)
Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.
2011-12-01
EPICS is used worldwide mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well suited to the design and development of automation systems, which are typically VME- or PLC-based, and of soft real-time systems, it may present several drawbacks when used to develop hard real-time systems and applications, especially when general-purpose operating systems such as plain Linux are chosen. This is particularly true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems that are to run on general-purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different Tokamaks. In this paper, we first summarize differences and similarities between EPICS IOC and MARTe. Then we report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux with an IO control algorithm implemented in an EPICS IOC and in MARTe.
The physics of large eruptions
NASA Astrophysics Data System (ADS)
Gudmundsson, Agust
2015-04-01
Based on eruptive volumes, eruptions can be classified as follows: small if the volumes are from less than 0.001 km3 to 0.1 km3, moderate if the volumes are from 0.1 to 10 km3, and large if the volumes are from 10 km3 to 1000 km3 or larger. The largest known explosive and effusive eruptions have eruptive volumes of 4000-5000 km3. The physics of small to moderate eruptions is reasonably well understood. For a typical mafic magma chamber in a crust that behaves as elastic, about 0.1% of the magma leaves the chamber (erupted and injected as a dyke) during rupture and eruption. Similarly, for a typical felsic magma chamber, the eruptive/injected volume during rupture and eruption is about 4%. To provide small to moderate eruptions, chamber volumes of the order of several tens to several hundred cubic kilometres would be needed. Shallow crustal chambers of these sizes are common, and deep-crustal and upper-mantle reservoirs of thousands of cubic kilometres exist. Thus, elastic and poro-elastic chambers of typical volumes can account for small to moderate eruptive volumes. When the eruptions become large, with volumes of tens or hundreds of cubic kilometres or more, an ordinary poro-elastic mechanism can no longer explain the eruptive volumes. The required sizes of the magma chambers and reservoirs to explain such volumes are simply too large to be plausible. Here I propose that the mechanics of large eruptions is fundamentally different from that of small to moderate eruptions. More specifically, I suggest that all large eruptions derive their magmas from chambers and reservoirs whose total cavity-volumes are mechanically reduced very much during the eruption. There are two mechanisms by which chamber/reservoir cavity-volumes can be reduced rapidly so as to squeeze out much of, or all, their magmas. One is piston-like caldera collapse. The other is graben subsidence. During large slip on the ring-faults/graben-faults the associated chamber/reservoir shrinks in volume, thereby maintaining the excess magmatic pressure much longer than is possible in the ordinary poro-elastic mechanism. Here the physics of caldera subsidence and graben subsidence is regarded as basically the same. The geometric difference in the surface expression is simply a reflection of the horizontal cross-sectional shape of the underlying magma body. In this new mechanism, the large eruption is the consequence -- not the cause -- of the caldera/graben subsidence. Thus, once the conditions for large-scale subsidence of a caldera/graben during an unrest period are established, then the likelihood of large to very large eruptions can be assessed and used in reliable forecasting. Gudmundsson, A., 2012. Strengths and strain energies of volcanic edifices: implications for eruptions, collapse calderas and landslides. Nat. Hazards Earth Syst. Sci., 12, 2241-2258. Gudmundsson, A., 2014. Energy release in great earthquakes and eruptions. Front. Earth Science 2:10. doi: 10.3389/feart.2014.00010 Gudmundsson, A., Acocella, V., 2015.Volcanotectonics: Understanding the Structure, Deformation, and Dynamics of Volcanoes. Cambridge University Press (published 2015).
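A quick arithmetic check of the scaling argument above, using the eruptive fractions quoted in the abstract (about 0.1% for a mafic chamber and about 4% for a felsic chamber), shows why a poro-elastic chamber cannot plausibly supply the largest eruptions:

```python
# Chamber volume needed to supply a given eruption if only a fixed fraction of the
# stored magma leaves the chamber (fractions as quoted in the abstract).
eruption_volumes_km3 = [0.1, 10.0, 100.0, 1000.0]
fractions = {"mafic (0.1%)": 0.001, "felsic (4%)": 0.04}

for label, f in fractions.items():
    for v in eruption_volumes_km3:
        print(f"{label:13s} eruption {v:7.1f} km^3 -> chamber ~ {v / f:9.0f} km^3")
# e.g. a 100 km^3 eruption at 0.1% yield would need a ~100,000 km^3 chamber,
# which is the implausibly large size the abstract refers to.
```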
DOE Office of Scientific and Technical Information (OSTI.GOV)
Razafinjanahary, H.; Rogemond, F.; Chermette, H.
The MS-LSD method remains a method of interest when rapidity and small computer resources are required; its main drawback is some lack of accuracy, mainly due to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can improve the results greatly. Calculations bearing on C₆₀ have been undertaken to underline this trend because, on the one hand, the fullerenes exhibit a remarkable possibility to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparison with the results. The authors' calculations suggest that with an added empty sphere the results compare well with those of more accurate calculations. The calculated electron affinities for C₆₀ and C₆₀⁻ are in reasonable agreement with experimental values, but the stability of C₆₀²⁻ in the gas phase is not found. 35 refs., 3 figs., 5 tabs.
Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities (Book)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2013-03-01
To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment that is required to meet the goals for large-scale renewable energy projects will come from the private sector, this guide has been organized to match Federal processes with typical phases of commercial project development. The main purpose of this guide is to provide a project development framework to allow the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project.
Flexible Architecture for FPGAs in Embedded Systems
NASA Technical Reports Server (NTRS)
Clark, Duane I.; Lim, Chester N.
2012-01-01
Commonly, field-programmable gate arrays (FPGAs) being developed in cPCI embedded systems include the bus interface in the FPGA. This complicates the development because the interface is complicated and requires a lot of development time and FPGA resources. In addition, flight qualification requires a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA, or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application specific integrated circuit (ASIC). An application programming interface (API) further reduces the development time needed to use the interface device in a system.
Identifying Severe Weather Impacts and Damage with Google Earth Engine
NASA Astrophysics Data System (ADS)
Molthan, A.; Burks, J. E.; Bell, J. R.
2015-12-01
Hazards associated with severe convective storms can lead to rapid changes in land surface vegetation. Depending upon the type of vegetation that has been impacted, their impacts can be relatively short lived, such as damage to seasonal crops that are eventually removed by harvest, or longer-lived, such as damage to a stand of trees or expanse of forest that require several years to recover. Since many remote sensing imagers provide their highest spatial resolution bands in the red and near-infrared to support monitoring of vegetation, these impacts can be readily identified as short-term and marked decreases in common vegetation indices such as NDVI, along with increases in land surface temperature that are observed at a reduced spatial resolution. The ability to identify an area of vegetation change is improved by understanding the conditions that are normal for a given time of year and location, along with a typical range of variability in a given parameter. This analysis requires a period of record well beyond the availability of near real-time data. These activities would typically require an analyst to download large volumes of data from sensors such as NASA's MODIS (aboard Terra and Aqua) or higher resolution imagers from the Landsat series of satellites. Google's Earth Engine offers a "big data" solution to these challenges, by providing a streamlined API and option to process the period of record of NASA MODIS and Landsat products through relatively simple Javascript coding. This presentation will highlight efforts to date in using Earth Engine holdings to produce vegetation and land surface temperature anomalies that are associated with damage to agricultural and other vegetation caused by severe thunderstorms across the Central and Southeastern United States. Earth Engine applications will show how large data holdings can be used to map severe weather damage, ascertain longer-term impacts, and share best practices learned and challenges with applying Earth Engine holdings to the analysis of severe weather damage. Other applications are also demonstrated, such as use of Earth Engine to prepare pre-event composites that can be used to subjectively identify other severe weather impacts. Future extension to flooding and wildfires is also proposed.
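The presentation describes this workflow in Earth Engine's Javascript API; the fragment below sketches the same anomaly idea with the Earth Engine Python package. The collection ID, band name, and dates are assumptions (the MODIS vegetation-index collection and an arbitrary event window) and may need updating for the current Earth Engine catalog.

```python
# Sketch of an NDVI anomaly (event composite minus multi-year seasonal baseline)
# with the Earth Engine Python API. Collection ID, band, and dates are assumptions.
import ee

ee.Initialize()

ndvi = ee.ImageCollection("MODIS/006/MOD13Q1").select("NDVI")

# Long-term baseline for the same season (assumed period of record).
baseline = ndvi.filter(ee.Filter.calendarRange(5, 6, "month")) \
               .filterDate("2003-01-01", "2014-12-31").mean()

# Composite for the weeks following a hypothetical severe-weather event.
event = ndvi.filterDate("2015-05-01", "2015-06-30").mean()

anomaly = event.subtract(baseline).rename("NDVI_anomaly")

# Strongly negative anomalies flag possible vegetation damage; export or map as needed, e.g.:
# task = ee.batch.Export.image.toDrive(anomaly, description="ndvi_anomaly", scale=250)
# task.start()
```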
Low-Light-Shift Cesium Fountain without Mechanical Shutters
NASA Technical Reports Server (NTRS)
Enzer, Daphna
2008-01-01
A new technique for reducing errors in a laser-cooled cesium fountain frequency standard provides for strong suppression of the light shift without the need for mechanical shutters. Because mechanical shutters are typically susceptible to failure after operating times of the order of months, the elimination of mechanical shutters could contribute significantly to the reliability of frequency standards that are required to function continuously for longer time intervals. With respect to the operation of an atomic-fountain frequency standard, the term "light shift" denotes an undesired relative shift in the two energy levels of the atoms (in this case, cesium atoms) in the atomic fountain during interrogation by microwaves. The shift in energy levels translates to a frequency shift that reduces the precision and possibly the accuracy of the frequency standard. For reasons too complex to describe within the space available for this article, the light shift is caused by any laser light that reaches the atoms during the microwave-interrogation period, but is strongest for near-resonance light. In the absence of any mitigating design feature, the light shift, expressed as a fraction of the standard's frequency, could be as large as approx. 2 x 10(exp -11), the largest error in the standard. In a typical prior design, to suppress the light shift, the intensity of laser light is reduced during the interrogation period by using a single-pass acousto-optic modulator to deflect the majority of light away from the main optical path. Mechanical shutters are used to block the remaining undeflected light to ensure complete attenuation. Without shutters, this remaining undeflected light could cause a light shift of as much as about 10(exp -15), which is unacceptably large in some applications. The new technique implemented here involves additionally shifting the laser wavelength off resonance by a relatively large amount (typically of the order of nanometers) during microwave interrogation. In this design, when microwave interrogation is not underway, the atoms are illuminated by a slave laser locked to the lasing frequency of a lower-power master laser.
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Winfree, William P.
2006-01-01
The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown to be effective for reducing thermographic NDE data. In a typical implementation of PCA, the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the good material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors is used to process the thermal data from the RCC materials. These eigenvectors can be generated either from an analytic model of the thermal response of the material under examination, or from a large cross section of experimental data. This paper will provide the details of the analytic model, an overview of the PCA process, and a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens. Details of a system that has been developed to allow in situ inspection of a majority of shuttle RCC components will be presented along with the acceptance test results for this system. Additionally, the results of applying this technology to the Space Shuttle Discovery after its return from flight will be presented.
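As a generic illustration of the fixed-eigenvector variant described above (not NASA's code), eigenvectors computed once from reference data, or from model-generated thermal responses, can be reused to project new thermographic sequences. The array sizes below are arbitrary; the new sequence is assumed to have the same number of frames as the reference.

```python
# Principal component projection of thermographic pixel histories onto a fixed,
# precomputed set of eigenvectors. Generic sketch, not the NASA implementation.
import numpy as np

def fixed_eigenvectors(reference, n_components=5):
    """Compute temporal eigenvectors once from reference data (frames x pixels)."""
    centered = reference - reference.mean(axis=0, keepdims=True)   # remove per-pixel time mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components]                  # frames x n_components

def project(sequence, eigvecs):
    """Project a new thermal sequence (same frame count) onto the fixed eigenvectors."""
    centered = sequence - sequence.mean(axis=0, keepdims=True)
    return eigvecs.T @ centered                 # n_components x pixels component images

rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 64 * 64))     # e.g. model-generated thermal responses
new_data = rng.normal(size=(100, 64 * 64))      # inspection data to process
eigvecs = fixed_eigenvectors(reference)
components = project(new_data, eigvecs).reshape(-1, 64, 64)
print(components.shape)                          # (5, 64, 64) component images
```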
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Dylan; Frank, Stephen; Slovensky, Michelle
Rich, well-organized building performance and energy consumption data enable a host of analytic capabilities for building owners and operators, from basic energy benchmarking to detailed fault detection and system optimization. Unfortunately, data integration for building control systems is challenging and costly in any setting. Large portfolios of buildings--campuses, cities, and corporate portfolios--experience these integration challenges most acutely. These large portfolios often have a wide array of control systems, including multiple vendors and nonstandard communication protocols. They typically have complex information technology (IT) networks and cybersecurity requirements and may integrate distributed energy resources into their infrastructure. Although the challenges are significant, the integration of control system data has the potential to provide proportionally greater value for these organizations through portfolio-scale analytics, comprehensive demand management, and asset performance visibility. As a large research campus, the National Renewable Energy Laboratory (NREL) experiences significant data integration challenges. To meet them, NREL has developed an architecture for effective data collection, integration, and analysis, providing a comprehensive view of data integration based on functional layers. The architecture is being evaluated on the NREL campus through deployment of three pilot implementations.
Large-format 17μm high-end VOx μ-bolometer infrared detector
NASA Astrophysics Data System (ADS)
Mizrahi, U.; Argaman, N.; Elkind, S.; Giladi, A.; Hirsh, Y.; Labilov, M.; Pivnik, I.; Shiloah, N.; Singer, M.; Tuito, A.; Ben-Ezra, M.; Shtrichman, I.
2013-06-01
Long range sights and targeting systems require a combination of high spatial resolution, low temporal NETD, and wide field of view. For practical electro-optical systems it is hard to support these constraints simultaneously. Moreover, achieving these needs with the relatively low-cost Uncooled μ-Bolometer technology is a major challenge in the design and implementation of both the bolometer pixel and the Readout Integrated Circuit (ROIC). In this work we present measured results from a new, large format (1024×768) detector array, with 17μm pitch. This detector meets the demands of a typical armored vehicle sight with its high resolution and large format, together with low NETD of better than 35mK (at F/1, 30Hz). We estimate a Recognition Range for a NATO target of better than 4 km at all relevant atmospheric conditions, which is better than standard 2nd generation scanning array cooled detector. A new design of the detector package enables improved stability of the Non-Uniformity Correction (NUC) to environmental temperature drifts.
Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults
Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah
2016-01-01
The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706
Sabatino, Denise E.; Nichols, Timothy C.; Merricks, Elizabeth; Bellinger, Dwight A.; Herzog, Roland W.; Monahan, Paul E.
2013-01-01
The X-linked bleeding disorder hemophilia is caused by mutations in coagulation factor VIII (hemophilia A) or factor IX (hemophilia B). Unless prophylactic treatment is provided, patients with severe disease (less than 1% clotting activity) typically experience frequent spontaneous bleeds. Current treatment is largely based on intravenous infusion of recombinant or plasma-derived coagulation factor concentrate. More effective factor products are being developed. Moreover, gene therapies for sustained correction of hemophilia are showing much promise in pre-clinical studies and in clinical trials. These advances in molecular medicine depend heavily on the availability of well-characterized small and large animal models of hemophilia, primarily hemophilia mice and dogs. Experiments in these animals represent important early and intermediate steps of translational research aimed at the development of better and safer treatments for hemophilia, such as protein and gene therapies or immune tolerance protocols. While murine models are excellent for studies of large groups of animals using genetically defined strains, canine models are important for testing scale-up and for longer-term follow-up, as well as for studies that require larger blood volumes. PMID:22137432
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blancon, Jean-Christophe Robert; Nie, Wanyi; Neukirch, Amanda J.
2016-04-27
Hybrid organic-inorganic perovskites have attracted considerable attention after promising developments in energy harvesting and other optoelectronic applications. However, further optimization will require a deeper understanding of the intrinsic photophysics of materials with relevant structural characteristics. Here, the dynamics of photoexcited charge carriers in large-area grain organic-inorganic perovskite thin films is investigated via confocal time-resolved photoluminescence spectroscopy. It is found that the bimolecular recombination of free charges is the dominant decay mechanism at excitation densities relevant for photovoltaic applications. Bimolecular coefficients are found to be on the order of 10⁻⁹ cm³ s⁻¹, comparable to typical direct-gap semiconductors, yet significantly smaller than theoretically expected. It is also demonstrated that there is no degradation in carrier transport in these thin films due to electronic impurities. Here, suppressed electron-hole recombination and transport that is not limited by deep-level defects provide a microscopic model for the superior performance of large-area grain hybrid perovskites for photovoltaic applications.
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents, which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first-order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
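As a minimal sketch of the updating step itself (not the full BFORM reliability loop), Broyden's rank-one formula replaces a finite-difference re-evaluation of the limit-state gradient with a secant update built from the change in the design point and the change in the limit-state value. The example limit-state function is illustrative only.

```python
# Broyden rank-one secant update of a limit-state gradient estimate.
# Sketch of the updating step only, not the full BFORM reliability algorithm.
import numpy as np

def broyden_update(grad, x_old, x_new, g_old, g_new):
    """Return an updated gradient estimate satisfying the secant condition
    grad_new . (x_new - x_old) = g_new - g_old."""
    s = x_new - x_old
    denom = float(s @ s)
    if denom == 0.0:
        return grad
    return grad + ((g_new - g_old) - float(grad @ s)) / denom * s

# Example with a known limit state g(x) = x0^2 + 2*x1 - 4 (illustrative only).
g = lambda x: x[0] ** 2 + 2.0 * x[1] - 4.0
x0, x1 = np.array([1.0, 1.0]), np.array([1.2, 0.9])
grad0 = np.array([2.0, 2.0])                      # gradient at x0 (known analytically here)
grad_est = broyden_update(grad0, x0, x1, g(x0), g(x1))
print(grad_est)   # secant-updated estimate [2.16, 1.92]; true gradient at x1 is [2.4, 2.0],
                  # obtained with no additional finite-difference evaluations
```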
Maximizing RNA folding rates: a balancing act.
Thirumalai, D; Woodson, S A
2000-01-01
Large ribozymes typically require very long times to refold into their active conformation in vitro, because the RNA is easily trapped in metastable misfolded structures. Theoretical models show that the probability of misfolding is reduced when local and long-range interactions in the RNA are balanced. Using the folding kinetics of the Tetrahymena ribozyme as an example, we propose that folding rates are maximized when the free energies of forming independent domains are similar to each other. A prediction is that the folding pathway of the ribozyme can be reversed by inverting the relative stability of the tertiary domains. This result suggests strategies for optimizing ribozyme sequences for therapeutics and structural studies. PMID:10864039
HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor
NASA Technical Reports Server (NTRS)
Gilliland, M. C.; Smith, B. J.; Calvert, W.
1976-01-01
The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. In order to achieve high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.
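As a software analogue of the full-empty synchronization scheme described above (HEP realized it with hardware semaphores), a one-slot memory cell can block writers until it is empty and readers until it is full. The class and threading setup below are illustrative, not a model of the HEP hardware.

```python
# Software analogue of a full/empty synchronized memory cell (HEP did this in hardware).
# A write blocks until the cell is empty; a read blocks until it is full and empties it.
import threading

class FullEmptyCell:
    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write(self, value):
        with self._cv:
            self._cv.wait_for(lambda: not self._full)   # wait until empty
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):
        with self._cv:
            self._cv.wait_for(lambda: self._full)        # wait until full
            value, self._full = self._value, False
            self._cv.notify_all()
            return value

cell = FullEmptyCell()
results = []
consumer = threading.Thread(target=lambda: results.extend(cell.read() for _ in range(3)))
consumer.start()
for i in range(3):
    cell.write(i)        # the producer blocks whenever the cell is still full
consumer.join()
print(results)           # [0, 1, 2]
```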
Donovan, Carl; Harwood, John; King, Stephanie; Booth, Cormac; Caneco, Bruno; Walker, Cameron
2016-01-01
There are many developments for offshore renewable energy around the United Kingdom whose installation typically produces large amounts of far-reaching noise, potentially disturbing many marine mammals. The potential to affect the favorable conservation status of many species means extensive environmental impact assessment requirements for the licensing of such installation activities. Quantification of such complex risk problems is difficult and much of the key information is not readily available. Expert elicitation methods can be employed in such pressing cases. We describe the methodology used in an expert elicitation study conducted in the United Kingdom for combining expert opinions based on statistical distributions and copula-like methods.
Local laser-strengthening: Customizing the forming behavior of car body steel sheets
NASA Astrophysics Data System (ADS)
Wagner, M.; Jahn, A.; Beyer, E.; Balzani, D.
2018-05-01
Future trends in designing lightweight components, especially for automotive applications, increasingly require complex and delicate structures with the highest possible level of capacity [1]. The manufacturing of metallic car body components is primarily realized by deep or stretch drawing. The forming process of especially cold-rolled and large-sized components is typically characterized by inhomogeneous stress and strain distributions. As a result, the avoidance of undesirable deep drawing effects like earing and local necking is among the greatest challenges in forming complex car body structures [2]. Hence, a novel local laser-treatment approach with the objective of customizing the forming behavior of car body steel sheets is currently being explored.
Mechanics of Constriction during Cell Division: A Variational Approach
Almendro-Vedia, Victor G.; Monroy, Francisco; Cao, Francisco J.
2013-01-01
During symmetric division, cells undergo large constriction deformations at a stable midcell site. Using a variational approach, we investigate the mechanical route for symmetric constriction by computing the bending energy of deformed vesicles with rotational symmetry. Forces required for constriction are explicitly computed at constant area and constant volume, and their values are found to be determined by cell size and bending modulus. For cell-sized vesicles with typical bending modulus values, we calculate the corresponding constriction forces. The instability of symmetrical constriction is shown and quantified with a characteristic coefficient, thus evidencing that cells need a robust mechanism to stabilize constriction at midcell. PMID:23990888
Dot-gov: market failure and the creation of a national health information technology system.
Kleinke, J D
2005-01-01
The U.S. health care marketplace's continuing failure to adopt information technology (IT) is the result of economic problems unique to health care, business strategy problems typical of fragmented industries, and technology standardization problems common to infrastructure development in free-market economies. Given the information intensity of medicine, the quality problems associated with inadequate IT, the magnitude of U.S. health spending, and the large federal share of that spending, this market failure requires aggressive governmental intervention. Federal policies to compel the creation of a national health IT system would reduce aggregate health care costs and improve quality, goals that cannot be attained in the health care marketplace.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.J.; Warner, J.A.; LeBarron, N.
Processes that use energetic ions for large substrates require that the time-averaged erosion effects from the ion flux be uniform across the surface. A numerical model has been developed to determine this flux and its effects on surface etching of a silica/photoresist combination. The geometry of the source and substrate is very similar to a typical deposition geometry with single or planetary substrate rotation. The model was used to tune an inert ion-etching process that used single or multiple Kaufman sources to less than 3% uniformity over a 30-cm aperture after etching 8 µm of material. The same model can be used to predict uniformity for ion-assisted deposition (IAD).
Large Area Sputter Coating on Glass
NASA Astrophysics Data System (ADS)
Katayama, Yoshihito
Large glass has been used for commercial buildings, housing, and vehicles for many years. Glass size for flat displays is getting larger and larger; the glass for the 8th generation is more than 5 m² in area. Demand for large glass is increasing not only in these markets but also in the rapidly growing solar cell market. Therefore, large-area coating that adds functionality to glass is in greater demand than ever. Sputtering and pyrolysis are the major coating methods on large glass today. The sputtering process is particularly popular because it can deposit a wide variety of materials with good coating uniformity on the glass. This paper describes a typical industrial sputtering system and recent progress in sputtering technology. It also shows typical coated glass products in the architectural, automotive, and display fields and comments on their functions, film stacks, and so on.
On Improving the Quality of Gas Tungsten Arc Welded 18Ni 250 Maraging Steel Rocket Motor Casings
NASA Astrophysics Data System (ADS)
Gupta, Renu N.; Raja, V. S.; Mukherjee, M. K.; Narayana Murty, S. V. S.
2017-10-01
In view of their excellent combination of strength and toughness, maraging steels (18Ni 250 grade) are widely used for the fabrication of large-sized solid rocket motor casings. Gas tungsten arc welding is commonly employed to fabricate these thin-walled metallic casings, as the technique is not only simple but also provides the desired mechanical properties. However, radiographic examination of the welds sometimes reveals typical unacceptable indications requiring weld repair. As a consequence, there is a significant drop in weld efficiency and productivity. In this work, the nature and the cause of the occurrence of these defects have been investigated and an attempt is made to overcome the problem. It has been found that the weld has a tendency to form typical Ca and Al oxide inclusions leading to the observed defects. The use of calcium fluoride flux has been found to produce a defect-free weld with a visible effect on the weld bead finish. The flux promotes the separation of inclusions, refines the grain size, and leads to significant improvement in the mechanical properties of the weldment.
Hyperbolic umbilic caustics from oblate water drops with tilted illumination: Observations
NASA Astrophysics Data System (ADS)
Jobe, Oli; Thiessen, David B.; Marston, Philip L.
2017-11-01
Various groups have reported observations of hyperbolic umbilic diffraction catastrophe patterns in the far-field scattering by oblate acoustically levitated drops with symmetric illumination. In observations of that type the drop's symmetry axis is vertical and the illuminating light beam (typically an expanded laser beam) travels horizontally. In the research summarized here, scattering patterns in the primary rainbow region and drop measurements were recorded with vertically tilted laser beam illumination having a grazing angle as large as 4 degrees. The findings from these observations may be summarized as follows: (a) It remains possible to adjust the drop aspect ratio (diameter/height) = D/H so as to produce a V-shaped hyperbolic umbilic focal section (HUFS) in the far-field scattering. (b) The shift in the required D/H was typically an increase of less than 1% and was quadratic in the tilt. (c) The apex of the V-shaped HUFS was shifted vertically by an amount proportional to the tilt with a coefficient close to unity. The levitated drops had negligible up-down asymmetry. Our method of investigation should be useful for other generalized rainbows with tilted illumination.
An Inexpensive Apparatus for Growing Photosynthetic Microorganisms in Exotic Atmospheres
NASA Astrophysics Data System (ADS)
Thomas, David J.; Herbert, Stephen K.
2005-02-01
Given the need for a light source, cyanobacteria and other photosynthetic microorganisms can be difficult and expensive to grow in large quantities. Lighted growth chambers and incubators typically cost 50-100% more than standard microbiological incubators. Self-shading of cells in liquid cultures prevents the growth of dense suspensions. Growing liquid cultures on a shaker table or lighted shaker incubator achieves greater cell densities, but adds considerably to the cost. For experiments in which gases other than air are required, the cost for conventional incubators increases even more. We describe an apparatus for growing photosynthetic organisms in exotic atmospheres that can be built relatively inexpensively (approximately $100 U.S.) using parts available from typical hardware or department stores (e.g., Wal-mart or K-mart). The apparatus uses microfiltered air (or other gases) to aerate, agitate, and mix liquid cultures, thus achieving very high cell densities (A750 > 3). Because gases are delivered to individual culture tubes, a variety of gas mixes can be used without the need for enclosed chambers. The apparatus works with liquid cultures of unicellular and filamentous species, and also works with agar slants.
NASA Astrophysics Data System (ADS)
Harkay, Gregory
2001-11-01
Interest on the part of the Physics Department at KSC in developing a computer interfaced lab with appeal to biology majors and a need to perform a clinical pulmonological study to fulfill a biology requirement led to the author's undergraduate research project in which a recording spirometer (typical cost: $15K) was constructed from readily available materials and a typical undergraduate lab computer interface. Simple components, including a basic photogate circuit, CPU fan, and PVC couplings were used to construct an instrument for measuring flow rates as a function of time. Pasco software was used to build an experiment in which data was collected and integration performed such that one could obtain accurate values for FEV1 (forced expiratory volume for one second) and FVC (forced vital capacity) and their ratio for a large sample of subjects. Results were compared to published norms and subjects with impaired respiratory mechanisms identified. This laboratory exercise is one with which biology students can clearly identify and would be a robust addition to the repertoire for a HS or college physics or biology teaching laboratory.
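As a sketch of the data reduction described (integrating the recorded flow signal to obtain FEV1, FVC, and their ratio), the fragment below uses a synthetic decaying flow trace standing in for the photogate-derived measurement; it is not the Pasco-based analysis itself.

```python
# Integrate a recorded expiratory flow trace to obtain FEV1, FVC, and their ratio.
# The exponential flow curve here is synthetic, standing in for the photogate data.
import numpy as np

t = np.linspace(0.0, 6.0, 600)        # s, duration of the forced exhalation
flow = 6.0 * np.exp(-t / 0.6)         # L/s, synthetic decaying expiratory flow

# Cumulative trapezoidal integration of flow gives exhaled volume versus time.
volume = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (flow[1:] + flow[:-1]))))
fev1 = np.interp(1.0, t, volume)      # volume exhaled in the first second
fvc = volume[-1]                      # total exhaled volume

print(f"FEV1 = {fev1:.2f} L, FVC = {fvc:.2f} L, FEV1/FVC = {fev1 / fvc:.2f}")
```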
Britten, Patricia; Cleveland, Linda E; Koegel, Kristin L; Kuczynski, Kevin J; Nickols-Richardson, Sharon M
2012-10-01
The US Department of Agriculture (USDA) Food Patterns, released as part of the 2010 Dietary Guidelines for Americans, are designed to meet nutrient needs without exceeding energy requirements. They identify amounts to consume from each food group and recommend that nutrient-dense forms (lean or low-fat, without added sugars or salt) be consumed. Americans fall short of most food group intake targets and do not consume foods in nutrient-dense forms. Intake of calories from solid fats and added sugars exceeds maximum limits by large margins. Our aim was to determine the potential effect on meeting USDA Food Pattern nutrient adequacy and moderation goals if Americans consumed the recommended quantities from each food group, but did not implement the advice to select nutrient-dense forms of food and instead made more typical food choices. A food-pattern modeling analysis using the USDA Food Patterns, which are structured to allow modifications in one or more aspects of the patterns, was used. Nutrient profiles for each food group were modified by replacing each nutrient-dense representative food with a similar but typical choice. Typical nutrient profiles were used to determine the energy and nutrient content of the food patterns. Moderation goals are not met when amounts of food in the USDA Food Patterns are followed and typical rather than nutrient-dense food choices are made. Energy, total fat, saturated fat, and sodium exceed limits in all patterns, often by substantial margins. With typical choices, calories were 15% to 30% (i.e., 350 to 450 kcal) above the target calorie level for each pattern. Adequacy goals were not substantially affected by the use of typical food choices. If consumers consume the recommended quantities from each food group and subgroup, but fail to choose foods in low-fat, no-added-sugars, and low-sodium forms, they will not meet the USDA Food Patterns moderation goals or the 2010 Dietary Guidelines for Americans. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.
O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D
2015-04-01
The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
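As a schematic of the "divide and conquer" partitioning idea (not HBlast's code), a Hadoop Streaming mapper can key each query sequence to every virtual database partition, so that each reducer runs the alignment for its assigned (query batch, partition) pair. The partition count and FASTA handling below are illustrative assumptions.

```python
# Schematic Hadoop Streaming mapper: key each FASTA query to every virtual database
# partition so the alignment work is spread across reducers.
# Illustrative of the partitioning idea only; this is not HBlast's implementation.
import sys

N_DB_PARTITIONS = 8      # assumed number of virtual database partitions

def read_fasta(stream):
    """Yield (header, sequence) records from a FASTA-formatted stream."""
    header, seq = None, []
    for line in stream:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

for header, sequence in read_fasta(sys.stdin):
    for partition in range(N_DB_PARTITIONS):
        # The reducer keyed on `partition` aligns its batch of queries against that DB slice.
        print(f"{partition}\t{header}\t{sequence}")
```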
Zhang, Chunlin; Geng, Xuesong; Wang, Hao; Zhou, Lei; Wang, Boguang
2017-01-01
Atmospheric ammonia (NH₃), a common alkaline gas found in air, plays a significant role in atmospheric chemistry, such as in the formation of secondary particles. However, large uncertainties remain in the estimation of ammonia emissions from nonagricultural sources, such as wastewater treatment plants (WWTPs). In this study, the ammonia emission factors from a large WWTP utilizing three typical biological treatment techniques to process wastewater in South China were calculated using the US EPA's WATER9 model with three years of raw sewage measurements and information about the facility. The individual emission factors calculated were 0.15 ± 0.03, 0.24 ± 0.05, 0.29 ± 0.06, and 0.25 ± 0.05 g NH₃ m⁻³ sewage for the adsorption-biodegradation activated sludge treatment process, the UNITANK process (an upgrade of the sequencing batch reactor activated sludge treatment process), and two slightly different anaerobic-anoxic-oxic treatment processes, respectively. The overall emission factor of the WWTP was 0.24 ± 0.06 g NH₃ m⁻³ sewage. The pH of the wastewater influent is likely an important factor affecting ammonia emissions, because higher emission factors existed at higher pH values. Based on the ammonia emission factor generated in this study, sewage treatment accounted for approximately 4% of the ammonia emissions for the urban area of South China's Pearl River Delta (PRD) in 2006, which is much less than the value of 34% estimated in previous studies. To reduce the large uncertainty in the estimation of ammonia emissions in China, more field measurements are required. Copyright © 2016 Elsevier Ltd. All rights reserved.
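As a hedged illustration of how such per-volume emission factors scale up, the snippet below multiplies the plant-wide factor reported above (0.24 g NH₃ per m³ of sewage) by a hypothetical treatment throughput; the daily flow value is an assumption chosen for the example, not a figure from the study.

```python
# Scale an emission factor (g NH3 per m3 of sewage) to an annual plant-level emission.
EMISSION_FACTOR_G_PER_M3 = 0.24      # plant-wide factor reported above
DAILY_FLOW_M3 = 500_000              # hypothetical throughput of a large WWTP (assumption)

daily_kg = EMISSION_FACTOR_G_PER_M3 * DAILY_FLOW_M3 / 1_000          # g -> kg
annual_tonnes = daily_kg * 365 / 1_000                               # kg -> t
print(f"~{daily_kg:.0f} kg NH3/day, ~{annual_tonnes:.0f} t NH3/year")
# ~120 kg NH3/day, ~44 t NH3/year for the assumed flow
```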
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy, and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either by direct measurement of subjective quality or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
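Photospace-weighting of objective attributes, as mentioned above, amounts to a weighted average of a quality metric over the joint (illuminance, distance) usage distribution. The sketch below is a minimal illustration of that bookkeeping; the bin definitions, counts, and quality scores are invented for the example and do not come from the ImagePhi data.

```python
import numpy as np

# Hypothetical photospace histogram: rows = subject illuminance bins, cols = distance bins.
# Counts represent how often users shot in each condition (invented numbers).
usage_counts = np.array([
    [120,  80,  30],   # low light (< 50 lux)
    [200, 150,  60],   # indoor    (50-500 lux)
    [ 90, 140, 130],   # outdoor   (> 500 lux)
])

# Hypothetical mean quality score measured for the same bins (arbitrary units).
quality = np.array([
    [1.5, 1.2, 0.8],
    [3.0, 2.6, 2.1],
    [4.2, 4.0, 3.7],
])

photospace_weights = usage_counts / usage_counts.sum()
user_experienced_quality = float((photospace_weights * quality).sum())
print(f"photospace-weighted quality: {user_experienced_quality:.2f}")
```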
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...
Collaborative Problem Solving in Young Typical Development and HFASD
ERIC Educational Resources Information Center
Kimhi, Yael; Bauminger-Zviely, Nirit
2012-01-01
Collaborative problem solving (CPS) requires sharing goals/attention and coordinating actions--all deficient in HFASD. Group differences were examined in CPS (HFASD/typical), with a friend versus with a non-friend. Participants included 28 HFASD and 30 typical children aged 3-6 years and their 58 friends and 58 non-friends. Groups were matched on…
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...
16 CFR Figure 5 to Part 1512 - Typical Handbrake Actuator Showing Grip Dimension
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Typical Handbrake Actuator Showing Grip Dimension 5 Figure 5 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Fig. 5 Figure 5 to Part 1512—Typical Handbrake Actuator Showing Grip Dimension...
14 CFR Appendix C to Part 1215 - Typical User Activity Timeline
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Typical User Activity Timeline C Appendix C... RELAY SATELLITE SYSTEM (TDRSS) Pt. 1215, App. C Appendix C to Part 1215—Typical User Activity Timeline... mission model. 3 years before launch (Ref. § 1215.109(c). Submit general user requirements to permit...
Barrett, Barbara; Mosweu, Iris; Jones, Catherine Rg; Charman, Tony; Baird, Gillian; Simonoff, Emily; Pickles, Andrew; Happé, Francesca; Byford, Sarah
2015-07-01
Autism spectrum disorder is a complex condition that requires specialised care. Knowledge of the costs of autism spectrum disorder, especially in comparison with other conditions, may be useful to galvanise policymakers and leverage investment in education and intervention to mitigate aspects of autism spectrum disorder that negatively impact individuals with the disorder and their families. This article describes the services and associated costs for four groups of individuals: adolescents with autistic disorder, adolescents with other autism spectrum disorders, adolescents with other special educational needs and typically developing adolescents using data from a large, well-characterised cohort assessed as part of the UK Special Needs and Autism Project at the age of 12 years. Average total costs per participant over 6 months were highest in the autistic disorder group (£11,029), followed by the special educational needs group (£9268), the broader autism spectrum disorder group (£8968) and the typically developing group (£2954). Specialised day or residential schooling accounted for the vast majority of costs. In regression analysis, lower age and lower adaptive functioning were associated with higher costs in the groups with an autism spectrum disorder. Sex, ethnicity, number of International Classification of Diseases (10th revision) symptoms, autism spectrum disorder symptom scores and levels of mental health difficulties were not associated with cost. © The Author(s) 2014.
Childhood Rituals and Executive Functions
ERIC Educational Resources Information Center
Tregay, Jenifer; Gilmour, Jane; Charman, Tony
2009-01-01
Repetitive and ritualistic behaviours (RRBs) are a feature of both typical and atypical development. While the cognitive correlates of these behaviours have been investigated in some neurodevelopmental conditions these links remain largely unexplored in typical development. The current study examined the relationship between RRBs and executive…
NASA Astrophysics Data System (ADS)
Wang, B.; Bauer, S.; Pfeiffer, W. T.
2015-12-01
Large-scale energy storage will be required to mitigate offsets between electric energy demand and the fluctuating electric energy production from renewable sources like wind farms, if renewables dominate energy supply. Porous formations in the subsurface could provide the large storage capacities required if chemical energy carriers such as hydrogen gas produced during phases of energy surplus are stored. This work assesses the behavior of a porous media hydrogen storage operation through numerical scenario simulation of a synthetic, heterogeneous sandstone formation formed by an anticlinal structure. The structural model is parameterized using data available for the North German Basin as well as data given for formations with similar characteristics. Based on the geological setting at the storage site, a total of 15 facies distributions is generated and the hydrological parameters are assigned accordingly. Hydraulic parameters are spatially distributed according to the facies present and include permeability, porosity, relative permeability, and capillary pressure. The storage is designed to supply energy in times of deficiency on the order of seven days, which represents the typical time span of weather conditions with no wind. It is found that, using five injection/extraction wells, 21.3 million sm³ of hydrogen gas can be stored and retrieved to supply 62,688 MWh of energy within 7 days. This requires a ratio of working to cushion gas of 0.59. The retrievable energy within this time represents the demand of about 450,000 people. Furthermore, it is found that longer storage times require larger gas volumes, while higher delivery rates additionally require more wells. The formation investigated here thus seems to offer sufficient capacity and deliverability to be used for a large-scale hydrogen gas storage operation.
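The quoted figures can be cross-checked with a back-of-the-envelope energy balance: multiplying the working-gas volume by a lower heating value of roughly 3.0 kWh per standard cubic metre of hydrogen reproduces the stated energy to within a few percent. The heating value and the unit conversion below are standard textbook numbers, not parameters taken from the simulations.

```python
# Rough consistency check of the stored-energy figure quoted above.
WORKING_GAS_SM3 = 21.3e6            # working gas volume, standard cubic metres
LHV_KWH_PER_SM3 = 3.0               # approx. lower heating value of H2 (~10.8 MJ per sm3)
RATIO_WORKING_TO_CUSHION = 0.59

energy_mwh = WORKING_GAS_SM3 * LHV_KWH_PER_SM3 / 1_000
cushion_gas_sm3 = WORKING_GAS_SM3 / RATIO_WORKING_TO_CUSHION
print(f"chemical energy of working gas: ~{energy_mwh:,.0f} MWh (study: 62,688 MWh)")
print(f"implied cushion gas volume:     ~{cushion_gas_sm3 / 1e6:.1f} million sm3")
```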
Implications of the Large O VI Columns around Low-redshift L* Galaxies
NASA Astrophysics Data System (ADS)
McQuinn, Matthew; Werk, Jessica K.
2018-01-01
Observations reveal massive amounts of O VI around star-forming L* galaxies, with covering fractions of near unity extending to the host halo's virial radius. This O VI absorption is typically kinematically centered upon photoionized gas, with line widths that are suprathermal and kinematically offset from the galaxy. We discuss various scenarios and whether they could result in the observed phenomenology (cooling gas flows, boundary layers, shocks, virialized gas). If collisionally ionized, as we argue is most probable, the O VI observations require that the circumgalactic medium (CGM) of L* galaxies holds nearly all of the associated baryons within a virial radius (∼10¹¹ M⊙) and hosts massive flows of cooling gas with ≈30 [nT/(30 cm⁻³ K)] M⊙ yr⁻¹, which must be largely prevented from accreting onto the host galaxy. Cooling and feedback energetics considerations require 10 < nT < 100 cm⁻³ K for the warm and hot halo gases. We argue that virialized gas, boundary layers, hot winds, and shocks are unlikely to directly account for the bulk of the O VI. Furthermore, we show that there is a robust constraint on the number density of many of the photoionized ∼10⁴ K absorption systems that yields upper bounds in the range n < (0.1-3) × 10⁻³ (Z/0.3) cm⁻³, suggesting that the dominant pressure in some photoionized clouds is nonthermal. This constraint is in accordance with the low densities inferred from more complex photoionization modeling. The large amount of cooling gas that is inferred could re-form these clouds in a fraction of the halo dynamical time, and it requires much of the feedback energy available from supernovae to be dissipated in the CGM.
Toward a RPC-based muon tomography system for cargo containers.
NASA Astrophysics Data System (ADS)
Baesso, P.; Cussans, D.; Thomay, C.; Velthuis, J.
2014-10-01
A large area scanner for cosmic muon tomography is currently being developed at the University of Bristol. Thanks to their abundance and penetrating power, cosmic muons have been suggested as ideal candidates to scan large containers in search of special nuclear materials, which are characterized by high Z and high density. The feasibility of such a scanner heavily depends on the detectors used to track the muons: for a typical container, the minimum required sensitive area is of the order of 100 m². The spatial resolution required depends on the geometrical configuration of the detectors. For practical purposes, a resolution of the order of 1 mm or better is desirable. A good time resolution can be exploited to provide momentum information: a resolution of the order of nanoseconds can be used to separate sub-GeV muons from muons with higher energies. Resistive plate chambers have a low cost per unit area and good spatial and time resolution; these features make them an excellent choice as detectors for muon tomography. In order to instrument a large area demonstrator we have produced 25 new readout boards and 30 glass RPCs. The RPCs measure 1800 mm × 600 mm and are read out using 1.68 mm pitch copper strips. The chambers were tested with a standardized procedure, i.e. without optimizing the working parameters to take into account differences in the manufacturing process, and the results show that the RPCs have an efficiency between 87% and 95%. The readout electronics show a signal-to-noise ratio greater than 20 for minimum ionizing particles. Spatial resolution better than 500 μm can easily be achieved using commercial readout ASICs. These results are better than the original minimum requirements to pass the tests and we are now ready to install the detectors.
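The quoted spatial resolution is consistent with the binary-readout limit of the 1.68 mm strip pitch: for hits distributed uniformly across one strip the RMS position error is pitch/√12 ≈ 0.49 mm, so even without charge-weighted interpolation the 500 μm figure is within reach. The short check below simply evaluates that standard formula.

```python
import math

PITCH_MM = 1.68                                   # copper strip pitch quoted above
binary_resolution_mm = PITCH_MM / math.sqrt(12)   # RMS of a uniform distribution over one strip
print(f"binary-readout resolution: {binary_resolution_mm * 1000:.0f} um")   # ~485 um
```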
The composite TTG series: evidence for a non-unique tectonic setting for Archaean crustal growth.
NASA Astrophysics Data System (ADS)
Moyen, Jean-François
2010-05-01
The geodynamic context of formation of the Archaean TTG (tonalite-trondhjemite-granodiorite) series, the dominant component of the Archaean continental crust, is a matter of debate. The two end-member models for TTG formation are melting of the basaltic slab in a "hot subduction" and intra-plate melting of basaltic rocks at the base of thick crust (oceanic plateau?). Both models do, however, predict strikingly different geothermal gradients, as in the modern Earth a typical subduction gradient is less than 10 °C/km compared to > 25-30 °C/km in the case of plateau melting. Using a large database of published TTG compositions, and filtering it to remove rocks that do not match the definition of TTG, it is possible to show that the TTG series is actually composite and made of a range of geochemically identifiable components that can be referred to as low-, medium- and high-pressure groups. The geochemistry of the low-pressure group (low Al, Na, Sr, relatively high Y and Nb) is consistent with derivation from a plagioclase-bearing, garnet-poor amphibolite; the medium-pressure group was formed in equilibrium with a garnet-rich, plagioclase-poor amphibolite, whereas the high-pressure group derived from a rutile-bearing eclogite. As the temperature of melting of metamafic rocks is largely independent of pressure, this corresponds to melting along a range of contrasting geothermal gradients. The high-pressure group requires gradients of 10-12 °C/km, whereas the gradient required for the low-pressure group can be as high as 25-30 °C/km. Regardless of the preferred tectonic model for the Archaean, such a range of gradients requires an equally large range of tectonic sites for the formation of the Archaean continental crust.
Quantifying the energy required for groundwater pumping across a regional aquifer system
NASA Astrophysics Data System (ADS)
Ronayne, M. J.; Shugert, D. T.
2017-12-01
Groundwater pumping can be a substantial source of energy expenditure, particularly in semiarid regions with large depths to water. In this study we assessed the energy required for groundwater pumping in the Denver Basin aquifer system, a group of sedimentary rock aquifers used for municipal water supply in Colorado. In recent decades, declining water levels in the Denver Basin aquifers have resulted in increased pumping lifts and higher energy use rates. We quantified the spatially variable energy intensity for groundwater pumping by analyzing spatial variations in the lift requirement. The median energy intensities for two major aquifers were 1.2 and 1.8 kWh m⁻³. Considering typical municipal well production rates and household water use in the study area, these results indicate that the energy cost associated with groundwater pumping can be a significant fraction (>20%) of the total electricity consumption for all household end uses. Pumping at this scale (hundreds of municipal wells producing from deep aquifers) also generates substantial greenhouse gas emissions. Analytical wellfield modeling conducted as part of this study clearly demonstrates how multiple components of the lift impact the energy requirement. Results provide guidance for water management strategies that reduce energy expenditure.
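Energy intensities of this magnitude follow directly from the basic lift calculation, E = ρgh/η per cubic metre pumped. The sketch below evaluates that expression for two assumed total lifts and an assumed 70% wire-to-water efficiency; these are illustrative values chosen to show the arithmetic, not the study's inputs.

```python
# Energy intensity of pumping: E = rho * g * h / eta, expressed per cubic metre lifted.
RHO = 1000.0          # kg/m3, density of water
G = 9.81              # m/s2
ETA = 0.70            # assumed wire-to-water pump efficiency (illustrative)
J_PER_KWH = 3.6e6

for lift_m in (300.0, 450.0):         # assumed total dynamic lifts, not the study's values
    kwh_per_m3 = RHO * G * lift_m / ETA / J_PER_KWH
    print(f"lift {lift_m:.0f} m -> {kwh_per_m3:.2f} kWh per m3 pumped")
# lift 300 m -> 1.17 kWh/m3; lift 450 m -> 1.75 kWh/m3 (compare 1.2 and 1.8 kWh/m3 above)
```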
A Starshade Petal Error Budget for Exo-Earth Detection and Characterization
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy
2011-01-01
We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.
High-speed aerodynamic design of space vehicle and required hypersonic wind tunnel facilities
NASA Astrophysics Data System (ADS)
Sakakibara, Seizou; Hozumi, Kouichi; Soga, Kunio; Nomura, Shigeaki
Problems associated with the aerodynamic design of space vehicles are considered, with emphasis on the role of hypersonic wind tunnel facilities in the development of the vehicle. First, to identify wind tunnel and computational fluid dynamics (CFD) requirements, operational environments are postulated for hypervelocity vehicles. Typical flight corridors are shown with the associated flow phenomena: real gas effects, low-density flow, and non-equilibrium flow. Based on an evaluation of these flight regimes and consideration of the operational requirements, the wind tunnel testing requirements for aerodynamic design are examined. Then, the aerodynamic design logic and optimization techniques used to develop and refine configurations in a traditional phased approach, based on the programmatic design of the space vehicle, are considered. The current methodology for determining the aerodynamic characteristics of the space vehicle, i.e., (1) ground test data, (2) numerical flow field solutions, and (3) flight test data, is also discussed. Based on these considerations, and by identifying the capabilities and limits of experimental and computational methods, the roles of a large conventional hypersonic wind tunnel and a high-enthalpy tunnel, and the interrelationship of the wind tunnels and CFD methods in actual aerodynamic design and analysis, are discussed.
A nuclear F-actin scaffold stabilizes ribonucleoprotein droplets against gravity in large cells.
Feric, Marina; Brangwynne, Clifford P
2013-10-01
The size of a typical eukaryotic cell is of the order of ∼10 μm. However, some cell types grow to very large sizes, including oocytes (immature eggs) of organisms from humans to starfish. For example, oocytes of the frog Xenopus laevis grow to a diameter ≥1 mm. They have a correspondingly large nucleus (germinal vesicle) of ∼450 μm in diameter, which is similar to smaller somatic nuclei, but contains a significantly higher concentration of actin. The form and structure of this nuclear actin remain controversial, and its potential mechanical role within these large nuclei is unknown. Here, we use a microrheology and quantitative imaging approach to show that germinal vesicles contain an elastic F-actin scaffold that mechanically stabilizes these large nuclei against gravitational forces, which are usually considered negligible within cells. We find that on actin disruption, ribonucleoprotein droplets, including nucleoli and histone locus bodies, undergo gravitational sedimentation and fusion. We develop a model that reveals how gravity becomes an increasingly potent force as cells and their nuclei grow larger than ∼10 μm, explaining the requirement for a stabilizing nuclear F-actin scaffold in large Xenopus oocytes. All life forms are subject to gravity, and our results may have broad implications for cell growth and size control.
A nuclear F-actin scaffold stabilizes RNP droplets against gravity in large cells
Feric, Marina; Brangwynne, Clifford P.
2013-01-01
The size of a typical eukaryotic cell is on the order of ≈10 μm. However, some cell types grow to very large sizes, including oocytes (immature eggs) of organisms from humans to starfish. For example, oocytes of the frog X. laevis grow to a diameter ≥1 mm. They contain a correspondingly large nucleus (germinal vesicle, GV) of ≈450 μm in diameter, which is similar to smaller somatic nuclei, but contains a significantly higher concentration of actin. The form and structure of this nuclear actin remain controversial, and its potential mechanical role within these large nuclei is unknown. Here, we use a microrheology and quantitative imaging approach to show that GVs contain an elastic F-actin scaffold that mechanically stabilizes these large nuclei against gravitational forces, which are usually considered negligible within cells. We find that upon actin disruption, RNA/protein droplets, including nucleoli and histone locus bodies (HLBs), undergo gravitational sedimentation and fusion. We develop a model that reveals how gravity becomes an increasingly potent force as cells and their nuclei grow larger than ≈10 μm, explaining the requirement for a stabilizing nuclear F-actin scaffold in large X. laevis oocytes. All life forms are subject to gravity, and our results may have broad implications for cell growth and size control. PMID:23995731
The Role of Subsurface Water in Carving Hesperian Amphitheater-Headed Valleys
NASA Astrophysics Data System (ADS)
Lapotre, M. G. A.; Lamb, M. P.
2017-12-01
Groundwater sapping may play a role in valley formation in rare cases on Earth, typically in sand or weakly cemented sandstones. Small-scale valleys resulting from groundwater seepage in loose sand typically have amphitheater-shaped canyon heads with roughly uniform widths. By analogy to terrestrial sapping valleys, Hesperian-aged amphitheater canyons on Mars have been interpreted to result from groundwater sapping, with implications for subsurface and surface water flows on ancient Mars. However, other studies suggest that martian amphitheater canyons carved in fractured rock may instead result from large overland floods, by analogy to dry cataracts in scabland terrains in the northwestern U.S. Understanding the formation of bedrock canyons is critical to our understanding of liquid water reservoirs on ancient Mars. Can groundwater sapping carve canyons in substrates other than sand? There is currently no model to predict the necessary conditions for groundwater to carve canyons in substrates ranging from loose sediment of various sizes to competent rock. To bridge this knowledge gap, we formulate a theoretical model coupling equations of groundwater flow and sediment transport that can be applied to a wide range of substrates. The model is used to infer whether groundwater sapping could have carved canyons in the absence of overland flows, and requires limited inputs that are measureable in the field or from orbital images. Model results show that sapping erosion is capable of forming canyons, but only in loose well-sorted sand. Coarser sediment is more permeable, but more difficult to transport. Finer sediment is more easily transported, but lower permeability precludes the necessary seepage discharge. Finally, fractured rock is highly permeable, but seepage discharges are far below those required to transport typical talus boulders. Using orbiter-based lithological constraints, we conclude that canyons near Echus Chasma are carved into bedrock and therefore required high-discharge overland flow during formation. These results have implications for Hesperian hydrology; while water volumes to carve sapping versus flood canyons need not be significantly different, erosion rates are orders of magnitude faster in the flood scenario, implying brief periods of abundant surface water on Hesperian Mars.
NASA Technical Reports Server (NTRS)
Swenson, Paul
2017-01-01
Satellite/payload ground systems are typically highly customized to a specific mission's use cases and utilize hundreds (or thousands) of specialized point-to-point interfaces for data flows and file transfers. Documentation and tracking of these complex interfaces requires extensive time to develop and extremely high staffing costs; implementation and testing of these interfaces are even more cost-prohibitive, and documentation often lags behind implementation, resulting in inconsistencies down the road. With expanding threat vectors, IT security, information assurance, and operational security have become key ground system architecture drivers. New Federal security-related directives are generated on a daily basis, imposing new requirements on current and existing ground systems; these mandated activities and data calls typically carry little or no additional funding for implementation. As a result, ground system sustaining engineering groups and information technology staff continually struggle to keep up with the rolling tide of security. Advancing security concerns and shrinking budgets are pushing these large stove-piped ground systems to begin sharing resources, i.e., operational/sysadmin staff, IT security baselines, architecture decisions, or even networks and hosting infrastructure. Refactoring these existing ground systems into multi-mission assets proves extremely challenging due to what is typically very tight coupling between legacy components; as a result, many "multi-mission" operations environments end up simply sharing compute resources and networks due to the difficulty of refactoring into true multi-mission systems. Utilizing continuous integration and rapid system deployment technologies in conjunction with an open-architecture messaging approach allows system engineers and architects to worry less about the low-level details of interfaces between components and configuration of systems. GMSEC messaging is inherently designed to support multi-mission requirements and allows components to aggregate data across multiple homogeneous or heterogeneous satellites or payloads; the highly successful Goddard Science and Planetary Operations Control Center (SPOCC) utilizes GMSEC as the hub for its automation and situational awareness capability. This shifts focus towards getting the ground system to a final configuration-managed baseline, as well as toward multi-mission, big-picture capabilities that help increase situational awareness, promote cross-mission sharing, and establish enhanced fleet management capabilities across all levels of the enterprise.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
PAWS locker: a passively aligned internal wavelength locker for telecommunications lasers
NASA Astrophysics Data System (ADS)
Boye, Robert R.; Te Kolste, Robert; Kathman, Alan D.; Cruz-Cabrera, Alvaro; Knight, Douglas; Hammond, J. Barney
2003-11-01
This paper presents the passively aligned Wavesetter (PAWS) locker: a micro-optic subassembly for use as an internal wavelength locker. As the wavelength spacing in dense wavelength division multiplexing (WDM) decreases, the performance demands placed upon source lasers increase. The required wavelength stability has led to the use of external wavelength lockers utilizing air-spaced, thermally stabilized etalons. However, package constraints are forcing the integration of the wavelength locker directly into the laser module. These etalons require that active tuning be performed during installation of the wavelength locker, as well as active temperature control (air-spaced etalons are typically too large for laser packages). A unique locking technique will be introduced that does not require active alignment or active temperature compensation. Using the principles of phase-shifting interferometry, a locking signal is derived without the inherent inflection points present in the signal of an etalon. The theoretical background of the PAWS locker will be discussed, as well as practical considerations for its implementation. Empirical results will be presented, including wavelength accuracy, alignment sensitivity, and thermal performance.
Public Reception of Climate Science: Coherence, Reliability, and Independence.
Hahn, Ulrike; Harris, Adam J L; Corner, Adam
2016-01-01
Possible measures to mitigate climate change require global collective actions whose impacts will be felt by many, if not all. Implementing such actions requires successful communication of the reasons for them, and hence the underlying climate science, to a degree that far exceeds typical scientific issues which do not require large-scale societal response. Empirical studies have identified factors, such as the perceived level of consensus in scientific opinion and the perceived reliability of scientists, that can limit people's trust in science communicators and their subsequent acceptance of climate change claims. Little consideration has been given, however, to recent formal results within philosophy concerning the relationship between truth, the reliability of evidence sources, the coherence of multiple pieces of evidence/testimonies, and the impact of (non-)independence between sources of evidence. This study draws on these results to evaluate exactly what has (and, more important, has not yet) been established in the empirical literature about the factors that bias the public's reception of scientific communications about climate change. Copyright © 2015 Cognitive Science Society, Inc.
A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.
Pagoulatos, N; Haynor, D R; Kim, Y
2001-09-01
We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.
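Calibration procedures of this kind ultimately reduce to estimating a rigid transform from point correspondences between the N-fiducial intersections seen in the US image and their known phantom coordinates. The snippet below shows the standard SVD-based (Kabsch) least-squares step on synthetic points; it is a generic sketch of that building block, not the authors' specific pipeline, and all data in it are invented.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that dst ~= R @ src + t (Kabsch/Procrustes)."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom_pts = rng.uniform(0, 100, size=(10, 3))               # ~10 fiducial points (synthetic)
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([5.0, -2.0, 12.0])
    image_pts = phantom_pts @ R_true.T + t_true + rng.normal(0, 0.1, (10, 3))   # add 0.1-unit noise
    R, t = rigid_fit(phantom_pts, image_pts)
    rms = np.sqrt(np.mean(np.sum((phantom_pts @ R.T + t - image_pts) ** 2, axis=1)))
    print(f"fit RMS residual: {rms:.3f} (same units as the input points)")
```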
NASA Technical Reports Server (NTRS)
Steurer, W. H.
1980-01-01
A survey of all presently defined or proposed large space systems indicated an ever-increasing demand for flexible components and materials, primarily as a result of the widening disparity between the stowage space of launch vehicles and the size of advanced systems. Typical flexible components and material requirements were identified on the basis of recurrence and/or functional commonality. This was followed by the evaluation of candidate materials and the search for material capabilities which promise to satisfy the postulated requirements. Particular attention was placed on thin films, and on the requirements of deployable antennas. The assessment of the performance of specific materials was based primarily on the failure mode, derived from a detailed failure analysis. In view of extensive ongoing work on thermal and environmental degradation effects, prime emphasis was placed on the assessment of the performance loss due to meteoroid damage. Quantitative data were generated for tension members and antenna reflector materials. A methodology was developed for the representation of the overall materials performance as related to systems service life. A number of promising new concepts for flexible materials were identified.
Schulze, H Georg; Turner, Robin F B
2014-01-01
Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
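A minimal sketch of the core idea described above, flagging points whose second derivatives are coincident along both the spectral and spatiotemporal axes of a 2-D dataset, is given below (rows are consecutive spectra, columns are wavenumber channels). The threshold expressed as a multiple of the noise standard deviation and the interpolation-based repair are generic choices made for this example, not the authors' exact parameter-free procedure.

```python
import numpy as np

def remove_spikes(spectra, noise_sd, k=8.0):
    """Flag points whose second derivative is large along BOTH axes, then repair by interpolation.

    spectra : 2-D array, shape (n_spectra, n_channels); rows are consecutive acquisitions.
    noise_sd: estimated standard deviation of the spectral noise.
    k       : detection threshold in units of noise_sd (a generic choice for this sketch).
    """
    spec_mask = np.zeros(spectra.shape, dtype=bool)
    spec_mask[:, 1:-1] = np.abs(np.diff(spectra, n=2, axis=1)) > k * noise_sd   # along wavenumber
    temp_mask = np.zeros(spectra.shape, dtype=bool)
    temp_mask[1:-1, :] = np.abs(np.diff(spectra, n=2, axis=0)) > k * noise_sd   # along acquisition order
    spikes = spec_mask & temp_mask                      # require coincidence on both axes
    cleaned = spectra.astype(float).copy()
    for i, j in zip(*np.nonzero(spikes)):               # repair: average neighbours two channels away
        left = cleaned[i, max(j - 2, 0)]
        right = cleaned[i, min(j + 2, spectra.shape[1] - 1)]
        cleaned[i, j] = 0.5 * (left + right)
    return cleaned, spikes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(100.0, 1.0, size=(20, 200))       # flat spectra plus noise (sd = 1)
    data[7, 50] += 40.0                                 # inject one cosmic-ray spike
    cleaned, spikes = remove_spikes(data, noise_sd=1.0)
    print("flagged spike locations:", [(int(i), int(j)) for i, j in zip(*np.nonzero(spikes))])
```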
Survey of U.S. Ancillary Services Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhi; Levin, Todd; Conzelmann, Guenter
In addition to providing energy to end-consumers, power system operators are also responsible for ensuring system reliability. To this end, power markets maintain an array of ancillary services to ensure that it is always possible to balance the supply and demand for energy in real-time. A subset of these ancillary services is commonly procured through market-based mechanisms: namely, Regulation, Spinning, and Non-spinning Reserves. Regulation Reserves are maintained to respond to supply/demand imbalances over short time frames, typically on the order of several seconds to one minute. Resources that provide Regulation Reserves adjust their generation or load levels in response to automatic generation control (AGC) signals provided by the system operator. Contingency reserves are maintained to provide additional generation capacity in the event that load increases substantially or supply-side resources reduce their output or are taken offline. These reserves are typically segmented into two categories: 1) Spinning or Synchronized Reserves, which are provided by generation units that are actively generating and have the ability to increase or decrease their output, and 2) Non-spinning or Non-synchronized Reserves, which are provided by generation resources that are not actively generating but are able to start up and provide generation within a specified timeframe. Contingency reserves typically have response times on the order of 10 to 30 minutes and can also be provided by demand-side resources that are capable of reducing their load. There are seven distinct power markets in the United States, each operated by a Regional Transmission Operator (RTO) or Independent System Operator (ISO) that operates the transmission system in its territory, operates markets for energy and ancillary services, and maintains system reliability. Each power market offers its own set of ancillary services, and precise definitions, requirements, and market mechanisms differ between markets. Despite the differences between markets, both in terms of services offered and system requirements, some broad trends generally apply. Regulation Reserves typically have the highest market prices, followed by Spinning Reserves and Non-spinning Reserves. Prices for Regulation Reserves have been the highest in the PJM market since it opened in October 2012. This is partially because PJM experienced large price spikes during the period of extreme weather conditions in early 2014. ERCOT has traditionally had the highest prices for Spinning Reserves (called Responsive Reserves in ERCOT), including several periods of sustained high prices between 2010 and 2012. This can be explained in part by the relatively high penetration of variable wind resources and a similarly high requirement relative to peak load. ERCOT has also traditionally had the highest price for Non-spinning Reserves, followed by the NYISO East region. Both have experienced several periods of prolonged high prices since their inception, an occurrence that has not been regularly seen in other markets. In ISO-NE and PJM, for example, the market clearing price for Non-spinning Reserves is typically $0/MWh more than 95% of the time. Market volume (in terms of the average amount of capacity of each service that is provided to a system) typically follows the reverse order of prices, as systems maintain the most Non-spinning Reserves capacity, followed by Spinning Reserves and Regulation Reserves.
PJM generally has the largest market for Regulation Reserves in terms of capacity. The size of most Regulation Reserves markets in terms of capacity stays relatively constant year-to-year, as this is dictated largely by system requirements. PJM also generally has the largest Spinning Reserves market in terms of capacity. SPP, MISO, ISO-NE and SPP (beginning in 2014) all have Spinning Reserve markets with similar average capacity levels. When combined, the markets for Non-spinning and Operating reserves in ISO-NE have a comparable capacity to the market for Primary Reserves in PJM. SPP, MISO, and CAISO all have smaller markets for their respective Non-spinning Reserves products that are roughly the same size as each other in terms of capacity.
Study for identification of Beneficial uses of Space (BUS). Volume 3: Appendices
NASA Technical Reports Server (NTRS)
1975-01-01
The quantification of required specimen(s) from space processing experiments, the typical EMI measurements and estimates of a typical RF source, and the integration of commercial payloads into spacelab were considered.
Ridge Waveguide Structures in Magnesium-Doped Lithium Niobate
NASA Technical Reports Server (NTRS)
Himmer, Phillip; Battle, Philip; Suckow, William; Switzer, Greg
2011-01-01
This work proposes to establish the feasibility of fabricating isolated ridge waveguides in 5% MgO:LN. Ridge waveguides in MgO:LN will significantly improve power handling and conversion efficiency, increase photonic component integration, and be well suited to space-based applications. The key innovation in this effort is to combine recently available large, high-photorefractive-damage-threshold, z-cut 5% MgO:LN with novel ridge fabrication techniques to achieve high-optical-power, low-cost, high-volume manufacturing of frequency conversion structures. The proposed ridge waveguide structure should maintain the characteristics of the periodically poled bulk substrate, allowing for the efficient frequency conversion typical of waveguides and the high optical damage threshold and long lifetimes typical of the 5% doped bulk substrate. The low cost and large area of 5% MgO:LN wafers, and the improved performance of the proposed ridge waveguide structure, will enhance existing measurement capabilities as well as reduce the resources required to achieve high-performance specifications. The purpose of the ridge waveguides in MgO:LN is to provide a platform technology that will improve optical power handling and conversion efficiency compared to existing waveguide technology. The proposed ridge waveguide is produced using standard microfabrication techniques. The approach is enabled by recent advances in inductively coupled plasma etchers and chemical mechanical planarization techniques. In conjunction with wafer bonding, this fabrication methodology can be used to create arbitrarily shaped waveguides, allowing complex optical circuits to be engineered in nonlinear optical materials such as magnesium-doped lithium niobate. Researchers here have identified NLO (nonlinear optical) ridge waveguide structures as having suitable value to be the leading frequency conversion structures. Their value is based on having the low-cost fabrication necessary to satisfy the challenging pricing requirements, as well as achieving the power handling and other specifications in a suitably compact package.
Clickers in the large classroom: current research and best-practice tips.
Caldwell, Jane E
2007-01-01
Audience response systems (ARS) or clickers, as they are commonly called, offer a management tool for engaging students in the large classroom. Basic elements of the technology are discussed. These systems have been used in a variety of fields and at all levels of education. Typical goals of ARS questions are discussed, as well as methods of compensating for the reduction in lecture time that typically results from their use. Examples of ARS use occur throughout the literature and often detail positive attitudes from both students and instructors, although exceptions do exist. When used in classes, ARS clickers typically have either a benign or positive effect on student performance on exams, depending on the method and extent of their use, and create a more positive and active atmosphere in the large classroom. These systems are especially valuable as a means of introducing and monitoring peer learning methods in the large lecture classroom. So that the reader may use clickers effectively in his or her own classroom, a set of guidelines for writing good questions and a list of best-practice tips have been culled from the literature and experienced users.
Clickers in the Large Classroom: Current Research and Best-Practice Tips
2007-01-01
Audience response systems (ARS) or clickers, as they are commonly called, offer a management tool for engaging students in the large classroom. Basic elements of the technology are discussed. These systems have been used in a variety of fields and at all levels of education. Typical goals of ARS questions are discussed, as well as methods of compensating for the reduction in lecture time that typically results from their use. Examples of ARS use occur throughout the literature and often detail positive attitudes from both students and instructors, although exceptions do exist. When used in classes, ARS clickers typically have either a benign or positive effect on student performance on exams, depending on the method and extent of their use, and create a more positive and active atmosphere in the large classroom. These systems are especially valuable as a means of introducing and monitoring peer learning methods in the large lecture classroom. So that the reader may use clickers effectively in his or her own classroom, a set of guidelines for writing good questions and a list of best-practice tips have been culled from the literature and experienced users. PMID:17339389
Food appropriation through large scale land acquisitions
NASA Astrophysics Data System (ADS)
Rulli, Maria Cristina; D'Odorico, Paolo
2014-05-01
The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing of the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
Searching for missing heritability: Designing rare variant association studies
Zuk, Or; Schaffner, Stephen F.; Samocha, Kaitlin; Do, Ron; Hechter, Eliana; Kathiresan, Sekar; Daly, Mark J.; Neale, Benjamin M.; Sunyaev, Shamil R.; Lander, Eric S.
2014-01-01
Genetic studies have revealed thousands of loci predisposing to hundreds of human diseases and traits, revealing important biological pathways and defining novel therapeutic hypotheses. However, the genes discovered to date typically explain less than half of the apparent heritability. Because efforts have largely focused on common genetic variants, one hypothesis is that much of the missing heritability is due to rare genetic variants. Studies of common variants are typically referred to as genomewide association studies, whereas studies of rare variants are often simply called sequencing studies. Because they are actually closely related, we use the terms common variant association study (CVAS) and rare variant association study (RVAS). In this paper, we outline the similarities and differences between RVAS and CVAS and describe a conceptual framework for the design of RVAS. We apply the framework to address key questions about the sample sizes needed to detect association, the relative merits of testing disruptive alleles vs. missense alleles, frequency thresholds for filtering alleles, the value of predictors of the functional impact of missense alleles, the potential utility of isolated populations, the value of gene-set analysis, and the utility of de novo mutations. The optimal design depends critically on the selection coefficient against deleterious alleles and thus varies across genes. The analysis shows that common variant and rare variant studies require similarly large sample collections. In particular, a well-powered RVAS should involve discovery sets with at least 25,000 cases, together with a substantial replication set. PMID:24443550
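As a rough, hedged illustration of why rare variant association studies need large case collections, the calculation below sizes a simple burden-style comparison of rare-variant carrier frequencies between cases and controls using a normal approximation. The carrier frequency, odds ratios, significance level, and power are assumed example values chosen for the illustration, not numbers taken from the paper's framework.

```python
from statistics import NormalDist

def cases_needed(carrier_freq, odds_ratio, alpha=2.5e-6, power=0.80):
    """Cases (and as many controls) needed to detect a shift in rare-variant carrier frequency.

    Two-proportion z-test, normal approximation; alpha defaults to a gene-based
    genome-wide threshold (~0.05 / 20,000 genes).
    """
    p0 = carrier_freq
    odds1 = odds_ratio * p0 / (1 - p0)
    p1 = odds1 / (1 + odds1)                     # carrier frequency in cases
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p0 * (1 - p0) + p1 * (1 - p1)
    return (z_a + z_b) ** 2 * var / (p1 - p0) ** 2

if __name__ == "__main__":
    for or_ in (1.5, 2.0, 3.0):
        n = cases_needed(carrier_freq=0.01, odds_ratio=or_)
        print(f"odds ratio {or_}: ~{n:,.0f} cases")
    # For a 1% carrier frequency and modest effect sizes, the required case counts
    # run into the tens of thousands, in line with the scale discussed above.
```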
Mitigating Structural Defects in Droop-Minimizing InGaN/GaN Quantum Well Heterostructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhibo; Chesin, Jordan; Singh, Akshay
2016-12-01
Modern commercial InGaN/GaN blue LEDs continue to suffer from efficiency droop, a reduction in efficiency with increasing drive current. External quantum efficiency (EQE) typically peaks at low drive currents (<10 A cm⁻²) and drops monotonically at higher current densities, falling to <85% of the peak EQE at a drive current of 100 A cm⁻². Mitigating droop-related losses will yield tremendous gains in both luminous efficacy (lumens/W) and cost (lumens/$). Such improvements are critical for continued large-scale market penetration of LED technologies, particularly in high-power and high flux per unit area applications. However, device structures that reduce droop typically require higher indium content and are accompanied by a corresponding degradation in material quality which negates the droop improvement via enhanced Shockley-Read-Hall (SRH) recombination. In this work, we use advanced characterization techniques to identify and classify structural defects in InGaN/GaN quantum well (QW) heterostructures that share features with low-droop designs. Using aberration-corrected scanning transmission electron microscopy (Cs-STEM), we find the presence of severe well width fluctuations (WWFs) in a number of low-droop device architectures. However, the presence of WWFs does not correlate strongly with external quantum efficiency or defect densities measured via deep level optical spectroscopy (DLOS). Hence, performance losses in the heterostructures of interest are likely dominated by nanoscale point or interfacial defects rather than large-scale extended defects.
Noise-induced phase space transport in two-dimensional Hamiltonian systems.
Pogorelov, I V; Kandrup, H E
1999-08-01
First passage time experiments were used to explore the effects of low amplitude noise as a source of accelerated phase space diffusion in two-dimensional Hamiltonian systems, and these effects were then compared with the effects of periodic driving. The objective was to quantify and understand the manner in which "sticky" chaotic orbits that, in the absence of perturbations, are confined near regular islands for very long times, can become "unstuck" much more quickly when subjected to even very weak perturbations. For both noise and periodic driving, the typical escape time scales logarithmically with the amplitude of the perturbation. For white noise, the details seem unimportant: Additive and multiplicative noise typically have very similar effects, and the presence or absence of a friction related to the noise by a fluctuation-dissipation theorem is also largely irrelevant. Allowing for colored noise can significantly decrease the efficacy of the perturbation, but only when the autocorrelation time, which vanishes for white noise, becomes so large that there is little power at frequencies comparable to the natural frequencies of the unperturbed orbit. Similarly, periodic driving is relatively inefficient when the driving frequency is not comparable to these natural frequencies. This suggests that noise-induced extrinsic diffusion, like modulational diffusion associated with periodic driving, is a resonance phenomenon. The logarithmic dependence of the escape time on amplitude reflects the fact that the time required for perturbed and unperturbed orbits to diverge a given distance scales logarithmically in the amplitude of the perturbation.
An Overlooked Source of Auroral Arc Field-Aligned Current
NASA Astrophysics Data System (ADS)
Knudsen, D. J.
2017-12-01
The search for the elusive generator of quiet auroral arcs often focuses on magnetospheric pressure gradients, based on the static terms in the so-called Vasyliunas equation [Vasyliunas, in "Magnetospheric Currents", Geophysical Monograph 28, 1984]. However, magnetospheric pressure gradient scale sizes are much larger than the width of individual auroral arcs. This discrepancy was noted by Atkinson [JGR, 27, p4746, 1970], who proposed that auroral arcs are fed instead by steady-state polarization currents, in which large-scale convection across quasi-static electric field structures leads to an apparent time dependence in the frame co-moving with the plasma, and therefore to the generation of ion polarization currents. This mechanism has been adopted by a series of authors over several decades, relating to studies of the ionospheric feedback instability (IFI). However, the steady-state polarization current mechanism does not require the IFI, nor even the ionosphere. Specifically, any quasi-static electric field structure that is stationary relative to large-scale plasma convection is subject to the generation of this current. This talk demonstrates that assumed convection speeds of the order of 100 m/s across typical arc field structures can lead to the generation of FAC magnitudes of several μA/m², typical of values observed at the ionospheric footpoint of auroral arcs. This current can be viewed as originating within the M-I coupling medium, along the entire field line connecting an auroral arc to its root in the magnetosphere.
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
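The first methodology described above, a global coverage grid over a low-dimensional projection that can be shared by concurrent runs, can be sketched as follows. The random linear projection, grid resolution, and acceptance rule are illustrative assumptions made for this sketch; the authors' method is defined over their own projections and algorithmic framework.

```python
import numpy as np

class CoverageGrid:
    """Low-dimensional occupancy grid used to bias sampling toward unexplored regions.

    Conformations (high-dimensional coordinate vectors) are projected to a few dimensions
    and hashed into grid cells; cell counts approximate local sampling density and could
    be shared across concurrent sampling runs instead of storing every conformation.
    """

    def __init__(self, dim, proj_dim=2, cell_size=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.projection = rng.normal(size=(proj_dim, dim)) / np.sqrt(dim)   # random linear projection
        self.cell_size = cell_size
        self.counts = {}                                  # cell index -> number of samples recorded

    def cell_of(self, conformation):
        projected = self.projection @ np.asarray(conformation, dtype=float)
        return tuple(np.floor(projected / self.cell_size).astype(int))

    def record(self, conformation):
        cell = self.cell_of(conformation)
        self.counts[cell] = self.counts.get(cell, 0) + 1

    def novelty(self, conformation):
        """Higher for conformations landing in sparsely visited cells."""
        return 1.0 / (1 + self.counts.get(self.cell_of(conformation), 0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grid = CoverageGrid(dim=3 * 500)                      # e.g. ~500 residues, 3 coordinates each
    for _ in range(200):
        sample = rng.normal(size=3 * 500)                 # stand-in for a sampled conformation
        # Keep a sample with probability proportional to its novelty (diversity-promoting rule).
        if rng.random() < grid.novelty(sample):
            grid.record(sample)
    print(f"occupied cells: {len(grid.counts)}, samples retained: {sum(grid.counts.values())}")
```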
NASA Astrophysics Data System (ADS)
Sargent, S.; Somers, J. M.
2015-12-01
Trace-gas eddy covariance flux measurement can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictates the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption, but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean. Water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically-cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor, and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power band width of 3.5 Hz.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near optimal performance of subsystems and components. Conversely, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is extremely challenging when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters that must be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. This work shows that a system level preliminary design process, with the ability to synthesize space propulsion solid rocket motor requirements into a near optimal design, is achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints towards the pursuit of the best possible design.
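A hedged sketch of the kind of genetic-algorithm search loop such a system level model relies on; the three design variables, their bounds, and the utility/penalty functions below are placeholders invented for illustration, not the motor model developed in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder design vector: [motor diameter (m), motor length (m), chamber pressure (MPa)]
BOUNDS = np.array([[0.5, 2.0], [1.0, 5.0], [4.0, 10.0]])

def utility(x):
    """Placeholder system-level figure of merit (higher is better)."""
    diameter, length, pressure = x
    performance = pressure * length * diameter**2          # crude stand-in for delivered impulse
    penalty = 100.0 * max(0.0, length / diameter - 4.0)    # stand-in packaging constraint
    return performance - penalty

def evolve(pop_size=40, generations=60, mutation_sigma=0.05):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, len(BOUNDS)))
    for _ in range(generations):
        fit = np.array([utility(ind) for ind in pop])
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])   # tournament selection
        mask = rng.random(pop.shape) < 0.5                               # uniform crossover
        children = np.where(mask, parents, parents[rng.permutation(pop_size)])
        children += rng.normal(0, mutation_sigma, pop.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])              # Gaussian mutation, clipped
    return pop[np.argmax([utility(ind) for ind in pop])]

print(evolve())   # near-optimal placeholder design vector
```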
Image Harvest: an open-source platform for high-throughput plant image processing and analysis.
Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal
2016-05-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
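For illustration only, the core of such a pipeline is embarrassingly parallel per-image trait extraction; the sketch below uses a naive greenness threshold and a hypothetical image folder, and is not Image Harvest's actual workflow or API:

```python
from multiprocessing import Pool
from pathlib import Path
import csv
import numpy as np
from PIL import Image

def plant_area(path):
    """Return (image name, crude projected plant area in pixels)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (g > 1.1 * r) & (g > 1.1 * b)      # naive "greenness" segmentation (assumed rule)
    return Path(path).name, int(mask.sum())

if __name__ == "__main__":
    images = sorted(Path("plant_images").glob("*.png"))   # hypothetical image folder
    with Pool() as pool:                                   # one worker per core; grids scale this out
        rows = pool.map(plant_area, map(str, images))
    with open("traits.csv", "w", newline="") as f:
        csv.writer(f).writerows([("image", "plant_area_px"), *rows])
```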
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
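A minimal sketch of the calibration step being evaluated: blur a reference image with a simulated transfer function, deconvolve with a slightly mis-estimated model transfer function, and report the resulting photometric error. The Gaussian transfer functions, noise regularization, and stand-in "granulation" image are assumptions, not the speckle transfer function models used here:

```python
import numpy as np

def gaussian_tf(n, width):
    """Isotropic low-pass transfer function on an n x n Fourier grid."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    return np.exp(-(fx**2 + fy**2) / (2 * width**2))

n = 256
truth = np.random.default_rng(1).normal(1.0, 0.1, (n, n))   # stand-in for photospheric intensity
atm_tf = gaussian_tf(n, 0.05)             # "true" atmospheric + AO transfer function
model_tf = gaussian_tf(n, 0.055)          # mis-estimated model speckle transfer function

observed_F = np.fft.fft2(truth) * atm_tf
# Wiener-like deconvolution with the model transfer function
recon = np.real(np.fft.ifft2(observed_F * model_tf / (model_tf**2 + 1e-3)))

# photometric precision: rms relative intensity error of the reconstruction
precision = np.sqrt(np.mean((recon - truth) ** 2)) / np.mean(truth)
print(f"rms photometric error: {100 * precision:.2f}%")
```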
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiszpanski, Anna M.
Metamaterials are composites with patterned subwavelength features where the choice of materials and subwavelength structuring bestows upon the metamaterials unique optical properties not found in nature, thereby enabling optical applications previously considered impossible. However, because the structure of optical metamaterials must be subwavelength, metamaterials operating at visible wavelengths require features on the order of 100 nm or smaller, and such resolution typically requires top-down lithographic fabrication techniques that are not easily scaled to device-relevant areas that are square centimeters in size. In this project, we developed a new fabrication route using block copolymers to make, over large device-relevant areas, optical metamaterials that operate at visible wavelengths. Our structures are smaller in size (sub-100 nm) and cover a larger area (cm²) than what has been achieved with traditional nanofabrication routes. To guide our experimental efforts, we developed an algorithm to calculate the expected optical properties (specifically the index of refraction) of such metamaterials that predicts that we can achieve surprisingly large changes in optical properties with small changes in metamaterials’ structure. In the course of our work, we also found that the ordered metal nanowire meshes produced by our scalable fabrication route for making optical metamaterials may also possibly act as transparent electrodes, which are needed in electrical displays and solar cells. We explored the ordered metal nanowire meshes’ utility for this application and developed design guidelines to aid our experimental efforts.
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists.
Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco
2013-01-01
Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphics card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphics card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior.
Strategies for Radiation Hardness Testing of Power Semiconductor Devices
NASA Technical Reports Server (NTRS)
Soltis, James V. (Technical Monitor); Patton, Martin O.; Harris, Richard D.; Rohal, Robert G.; Blue, Thomas E.; Kauffman, Andrew C.; Frasca, Albert J.
2005-01-01
Plans on the drawing board for future space missions call for much larger power systems than have been flown in the past. These systems would employ much higher voltages and currents to enable more powerful electric propulsion engines and other improvements on what will also be much larger spacecraft. Long term human outposts on the moon and planets would also require high voltage, high current and long life power sources. Only hundreds of watts are produced and controlled on a typical robotic exploration spacecraft today. Megawatt systems are required for tomorrow. Semiconductor devices used to control and convert electrical energy in large space power systems will be exposed to electromagnetic and particle radiation of many types, depending on the trajectory and duration of the mission and on the power source. It is necessary to understand the often very different effects of the radiations on the control and conversion systems. Power semiconductor test strategies that we have developed and employed will be presented, along with selected results. The early results that we have obtained in testing large power semiconductor devices give a good indication of the degradation in electrical performance that can be expected in response to a given dose. We are also able to highlight differences in radiation hardness that may be device or material specific.
Aero-Propulsion Technology (APT) Task V Low Noise ADP Engine Definition Study
NASA Technical Reports Server (NTRS)
Holcombe, V.
2003-01-01
A study was conducted to identify and evaluate noise reduction technologies for advanced ducted prop propulsion systems that would allow increased capacity operation and result in an economically competitive commercial transport. The study investigated the aero/acoustic/structural advancements in fan and nacelle technology required to match or exceed the fuel burned and economic benefits of a constrained diameter large Advanced Ducted Propeller (ADP) compared to an unconstrained ADP propulsion system with a noise goal of 5 to 10 EPNdB reduction relative to FAR 36 Stage 3 at each of the three measuring stations, namely takeoff (cutback), approach and sideline. A second generation ADP was selected to operate within the maximum nacelle diameter constraint of 160 deg to allow installation under the wing. The impact of fan and nacelle technologies of the second generation ADP on fuel burn and direct operating costs for a typical 3000 nm mission was evaluated through use of a large, twin engine commercial airplane simulation model. The major emphasis of this study focused on fan blade aero/acoustic and structural technology evaluations and advanced nacelle designs. Results of this study have identified the testing required to verify the interactive performance of these components, along with noise characteristics, by wind tunnel testing utilizing an advanced interaction rig.
Single-frequency 3D synthetic aperture imaging with dynamic metasurface antennas.
Boyarsky, Michael; Sleasman, Timothy; Pulido-Mancera, Laura; Diebold, Aaron V; Imani, Mohammadreza F; Smith, David R
2018-05-20
Through aperture synthesis, an electrically small antenna can be used to form a high-resolution imaging system capable of reconstructing three-dimensional (3D) scenes. However, the large spectral bandwidth typically required in synthetic aperture radar systems to resolve objects in range often requires costly and complex RF components. We present here an alternative approach based on a hybrid imaging system that combines a dynamically reconfigurable aperture with synthetic aperture techniques, demonstrating the capability to resolve objects in three dimensions (3D), with measurements taken at a single frequency. At the core of our imaging system are two metasurface apertures, both of which consist of a linear array of metamaterial irises that couple to a common waveguide feed. Each metamaterial iris has integrated within it a diode that can be biased so as to switch the element on (radiating) or off (non-radiating), such that the metasurface antenna can produce distinct radiation profiles corresponding to different on/off patterns of the metamaterial element array. The electrically large size of the metasurface apertures enables resolution in range and one cross-range dimension, while aperture synthesis provides resolution in the other cross-range dimension. The demonstrated imaging capabilities of this system represent a step forward in the development of low-cost, high-performance 3D microwave imaging systems.
Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H
2015-02-06
Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
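A hedged sketch of the standard-curve workflow such software automates, not the Qualis-SIS implementation: fit a linear response curve, iteratively drop calibration levels that fail accuracy/CV acceptance criteria, and back-calculate and flag sample concentrations. The concentration units, tolerances, and example numbers are assumptions:

```python
import numpy as np

def fit_curve(conc, ratio):
    slope, intercept = np.polyfit(conc, ratio, 1)   # response ratio vs. spiked concentration
    return slope, intercept

def level_ok(conc, ratios, slope, intercept, acc_tol=0.20, cv_tol=0.20):
    """Accept a calibration level if back-calculated accuracy and replicate CV pass."""
    back = (np.asarray(ratios) - intercept) / slope
    accuracy_error = np.abs(back / conc - 1.0)
    cv = np.std(ratios) / np.mean(ratios)
    return np.all(accuracy_error < acc_tol) and cv < cv_tol

# calibration levels (fmol on column, illustrative) mapped to replicate response ratios
levels = {1.0: [0.11, 0.10], 5.0: [0.52, 0.49], 25.0: [2.6, 2.4], 100.0: [10.4, 9.8]}
while True:
    conc = np.repeat(list(levels), [len(v) for v in levels.values()])
    ratio = np.concatenate(list(levels.values()))
    slope, intercept = fit_curve(conc, ratio)
    bad = [c for c, r in levels.items() if not level_ok(c, r, slope, intercept)]
    if not bad or len(levels) <= 3:
        break
    levels.pop(min(bad))     # iteratively remove the lowest failing level

def quantify(sample_ratio):
    c = (sample_ratio - intercept) / slope
    flag = "ok" if min(levels) <= c <= max(levels) else "outside curve"
    return c, flag

print(quantify(1.3))
```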
Cerebral energy metabolism and the brain's functional network architecture: an integrative review.
Lord, Louis-David; Expert, Paul; Huckins, Jeremy F; Turkheimer, Federico E
2013-09-01
Recent functional magnetic resonance imaging (fMRI) studies have emphasized the contributions of synchronized activity in distributed brain networks to cognitive processes in both health and disease. The brain's 'functional connectivity' is typically estimated from correlations in the activity time series of anatomically remote areas, and postulated to reflect information flow between neuronal populations. Although the topological properties of functional brain networks have been studied extensively, considerably less is known regarding the neurophysiological and biochemical factors underlying the temporal coordination of large neuronal ensembles. In this review, we highlight the critical contributions of high-frequency electrical oscillations in the γ-band (30 to 100 Hz) to the emergence of functional brain networks. After describing the neurobiological substrates of γ-band dynamics, we specifically discuss the elevated energy requirements of high-frequency neural oscillations, which represent a mechanistic link between the functional connectivity of brain regions and their respective metabolic demands. Experimental evidence is presented for the high oxygen and glucose consumption, and strong mitochondrial performance required to support rhythmic cortical activity in the γ-band. Finally, the implications of mitochondrial impairments and deficits in glucose metabolism for cognition and behavior are discussed in the context of neuropsychiatric and neurodegenerative syndromes characterized by large-scale changes in the organization of functional brain networks.
Conduction Cooling of a Niobium SRF Cavity Using a Cryocooler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, Joshua; Geelhoed, Michael; Dhuley, Ram
Superconducting Radio Frequency (SRF) cavities are the primary choice for accelerating charged particles in high-energy research accelerators. Institutions like Fermilab use SRF cavities because they enable significantly higher gradients and quality factors than normal-conducting RF cavities and DC voltage cavities. To cool the SRF cavities to low temperatures (typically around 2 K), liquid helium refrigerators are used. Producing and maintaining the necessary liquid helium requires large, elaborate cryogenic plants involving dewars, compressors, expansion engines, and recyclers. The cost, complexity, and space required for such plants are part of the reason that industry has not yet adopted SRF-based accelerators. At the Illinois Accelerator Research Center (IARC) at Fermilab, our team seeks to make SRF technology accessible not only to large research accelerators, but to industry as well. If we eliminate the complexity associated with liquid helium plants, SRF-based industrial accelerators may finally become a reality. One way to do this is to eliminate the use of liquid helium baths altogether and develop a brand-new cooling technique for SRF cavities: conduction cooling using a cryocooler. Recent advances in SRF technology have made it possible to operate SRF cavities at 4 K, a temperature easily achievable using commercial cryocoolers. Our IARC team is taking advantage of this technology to cool SRF cavities.
Characterization of a Regenerable Impactor Filter for Spacecraft Cabin Applications
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2015-01-01
Regenerable filters will play an important role in human exploration beyond low-Earth orbit. Life Support Systems aboard crewed spacecrafts will have to operate reliably and with little maintenance over periods of more than a year, even multiple years. Air filters are a key component of spacecraft life support systems, but they often require frequent routine maintenance. Bacterial filters aboard the International Space Station require almost weekly cleaning of the pre-filter screen to remove large lint debris captured in the microgravity environment. The source of the airborne matter which is collected on the filter screen is typically from clothing fibers, biological matter (hair, skin, nails, etc.) and material wear. Clearly a need for low maintenance filters requiring little to no crew intervention will be vital to the success of the mission. An impactor filter is being developed and tested to address this need. This filter captures large particle matter through inertial separation and impaction methods on collection surfaces, which can be automatically cleaned after they become heavily loaded. The impactor filter can serve as a pre-filter to augment the life of higher efficiency filters that capture fine and ultrafine particles. A prototype of the filter is being tested at the Particulate Filtration Laboratory at NASA Glenn Research Center to determine performance characteristics, including particle cut size and overall efficiency. Model results are presented for the flow characteristics near the orifice plate through which the particle-laden flow is accelerated as well as around the collection bands.
Orbiter Kapton wire operational requirements and experience
NASA Technical Reports Server (NTRS)
Peterson, R. V.
1994-01-01
The agenda of this presentation includes the Orbiter wire selection requirements, the Orbiter wire usage, fabrication and test requirements, typical wiring installations, Kapton wire experience, NASA Kapton wire testing, summary, and backup data.
On Heating Large Bright Coronal Loops by Magnetic Microexplosions at their Feet
NASA Technical Reports Server (NTRS)
Moore, Ronald L; Falconer, D. A.; Porter, Jason G.
1999-01-01
In previous work, by registering Yohkoh SXT coronal X-ray images with MSFC vector magnetograms, we found that: (1) many of the larger bright coronal loops rooted at one or both ends in an active region are rooted around magnetic islands of included polarity, (2) the core field encasing the neutral line encircling the island is strongly sheared, and (3) this sheared core field is the seat of frequent microflares. This suggests that the coronal heating in these extended bright loops is driven by many small explosive releases of stored magnetic energy from the sheared core field at their feet, some of which magnetic microexplosions also produce the microflare heating in the core fields. In this paper, we show that this scenario is feasible in terms of the energy required for the observed coronal heating and the magnetic energy available in the observed sheared core fields. In a representative active region, from the X-ray and vector field data, we estimate the coronal heating consumption by a selected typical large bright loop, the coronal heating consumption by a typical microflare at the foot of this loop, the frequency of microflares at the foot, and the available magnetic energy in the microflaring core field. We find that: (1) the rate of magnetic energy release to power the microflares at the foot (approx. 6 x 10^25 erg/s) is enough to also power the coronal heating in the body of the extended loop (approx. 2 x 10^25 erg/s), and (2) there is enough stored magnetic energy in the sheared core field to sustain the microflaring and extended loop heating for about a day, which is a typical time for buildup of neutral-line magnetic shear in an active region. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
NASA Astrophysics Data System (ADS)
Bouchet, F.; Laurie, J.; Zaboronski, O.
2012-12-01
We describe transitions between attractors with either one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, and occur over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require on one hand an extremely good resolution in order to simulate accurately the turbulence and on the other hand simulations performed over an extremely long time. Those conditions are usually not met together in any realistic models. However many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversal, atmospheric flows, and so on). Their study through numerical computations is inaccessible using conventional means. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory, and justified in some cases through the large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude lower than the typical transition time. We illustrate the power of those tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist. Those attractors correspond to turbulent flows dominated by either one or more zonal jets similar to midlatitude atmosphere jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition rates, which correspond to transition times several orders of magnitude larger than a typical time determined from the jet structure. We discuss the medium-term generalization of those results to models with more complexity, like primitive equations or GCMs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio
Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructured grids, which generate unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory bandwidth bound and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the elements they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flows, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, Graph Breadth First search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
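For reference, the irregular kernel used in the case study, breadth-first search, is essentially the following loop (a generic sketch in Python rather than the C input that would actually be synthesized); the data-dependent accesses into the adjacency structure are what make the memory behaviour unpredictable and the parallelism dynamic:

```python
from collections import deque

def bfs_levels(adj, source):
    """adj: dict mapping vertex -> list of neighbours (an irregular, pointer-based structure)."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:              # data-dependent, hard-to-predict memory accesses
            if v not in level:
                level[v] = level[u] + 1
                frontier.append(v)
    return level

print(bfs_levels({0: [1, 2], 1: [3], 2: [3], 3: []}, 0))
```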
DOE Office of Scientific and Technical Information (OSTI.GOV)
Font-Ribera, Andreu; Miralda-Escudé, Jordi; Arnau, Eduard
2012-11-01
We present the first measurement of the large-scale cross-correlation of Lyα forest absorption and Damped Lyman α systems (DLA), using the 9th Data Release of the Baryon Oscillation Spectroscopic Survey (BOSS). The cross-correlation is clearly detected on scales up to 40 h^−1 Mpc and is well fitted by the linear theory prediction of the standard Cold Dark Matter model of structure formation with the expected redshift distortions, confirming its origin in the gravitational evolution of structure. The amplitude of the DLA-Lyα cross-correlation depends on only one free parameter, the bias factor of the DLA systems, once the Lyα forest bias factors are known from independent Lyα forest correlation measurements. We measure the DLA bias factor to be b_D = (2.17±0.20)β_F^0.22, where the Lyα forest redshift distortion parameter β_F is expected to be above unity. This bias factor implies a typical host halo mass for DLAs that is much larger than expected in present DLA models, and is reproduced if the DLA cross section scales with halo mass as M_h^α, with α = 1.1±0.1 for β_F = 1. Matching the observed DLA bias factor and rate of incidence requires that atomic gas remains extended in massive halos over larger areas than predicted in present simulations of galaxy formation, with typical DLA proper sizes larger than 20 kpc in host halos of masses ∼ 10^12 M_☉. We infer that typical galaxies at z ≅ 2 to 3 are surrounded by systems of atomic clouds that are much more extended than the luminous parts of galaxies and contain ∼ 10% of the baryons in the host halo.
Generation of vector dissipative and conventional solitons in large normal dispersion regime.
Yun, Ling
2017-08-07
We report the generation of both polarization-locked vector dissipative soliton and group velocity-locked vector conventional soliton in a nanotube-mode-locked fiber ring laser with large normal dispersion, for the first time to the best of our knowledge. Depending on the polarization-dependent extinction ratio of the fiber-based Lyot filter, the two types of vector solitons can be switched by simply tuning the polarization controller. In the case of low filter extinction ratio, the output vector dissipative soliton exhibits steep spectral edges and strong frequency chirp, which presents a typical pulse duration of ~23.4 ps, and can be further compressed to ~0.9 ps. In the contrasting case of high filter extinction ratio, the vector conventional soliton has clear Kelly sidebands with transform-limited pulse duration of ~1.8 ps. Our study provides a new and simple method to achieve two different vector soliton sources, which is attractive for potential applications requiring different pulse profiles.
Rapid epitaxy-free graphene synthesis on silicidated polycrystalline platinum
Babenko, Vitaliy; Murdock, Adrian T.; Koós, Antal A.; Britton, Jude; Crossley, Alison; Holdway, Philip; Moffat, Jonathan; Huang, Jian; Alexander-Webber, Jack A.; Nicholas, Robin J.; Grobert, Nicole
2015-01-01
Large-area synthesis of high-quality graphene by chemical vapour deposition on metallic substrates requires polishing or substrate grain enlargement followed by a lengthy growth period. Here we demonstrate a novel substrate processing method for facile synthesis of mm-sized, single-crystal graphene by coating polycrystalline platinum foils with a silicon-containing film. The film reacts with platinum on heating, resulting in the formation of a liquid platinum silicide layer that screens the platinum lattice and fills topographic defects. This reduces the dependence on the surface properties of the catalytic substrate, improving the crystallinity, uniformity and size of graphene domains. At elevated temperatures, growth rates more than an order of magnitude higher (120 μm min⁻¹) than typically reported are achieved, allowing savings in costs for consumable materials, energy and time. This generic technique paves the way for using a whole new range of eutectic substrates for the large-area synthesis of 2D materials. PMID:26175062
Fault-tolerant bandwidth reservation strategies for data transfers in high-performance networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Liudong; Zhu, Michelle M.; Wu, Chase Q.
2016-11-22
Many next-generation e-science applications need fast and reliable transfer of large volumes of data with guaranteed performance, which is typically enabled by the bandwidth reservation service in high-performance networks. One prominent issue in such network environments with large footprints is that node and link failures are inevitable, hence potentially degrading the quality of data transfer. We consider two generic types of bandwidth reservation requests (BRRs) concerning data transfer reliability: (i) to achieve the highest data transfer reliability under a given data transfer deadline, and (ii) to achieve the earliest data transfer completion time while satisfying a given data transfer reliability requirement. We propose two periodic bandwidth reservation algorithms with rigorous optimality proofs to optimize the scheduling of individual BRRs within BRR batches. The efficacy of the proposed algorithms is illustrated through extensive simulations in comparison with scheduling algorithms widely adopted in production networks in terms of various performance metrics.
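A toy illustration of the two BRR objectives described above, not the paper's scheduling algorithms: each candidate reservation window is scored with an assumed exponential survival model, then we either maximize reliability subject to a deadline or minimize completion time subject to a reliability floor. All numbers and the reliability model are assumptions:

```python
import math

def finish_time(start, size_gb, bw_gbps):
    return start + size_gb / bw_gbps

def reliability(start, end, failure_rate_per_s):
    # crude model: the reservation succeeds if no failure occurs during the transfer
    return math.exp(-failure_rate_per_s * (end - start))

# candidate reservation windows: (start [s], bandwidth [Gb/s], failure rate [1/s])
options = [(0, 2.0, 1e-4), (30, 5.0, 3e-4), (60, 8.0, 5e-5)]
size = 400.0  # Gb to transfer

def brr_type1(deadline):
    """Highest reliability among options that finish by the deadline."""
    ok = [(reliability(s, finish_time(s, size, bw), fr), s, bw)
          for s, bw, fr in options if finish_time(s, size, bw) <= deadline]
    return max(ok) if ok else None

def brr_type2(min_reliability):
    """Earliest completion among options meeting the reliability floor."""
    ok = [(finish_time(s, size, bw), s, bw) for s, bw, fr in options
          if reliability(s, finish_time(s, size, bw), fr) >= min_reliability]
    return min(ok) if ok else None

print(brr_type1(deadline=200), brr_type2(min_reliability=0.99))
```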
On the impact of approximate computation in an analog DeSTIN architecture.
Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar
2014-05-01
Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
The size of a typical selective plane illumination microscopy (SPIM) image is limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
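A minimal sketch of the registration idea, not the authors' implementation: given bead centroids already localized in two overlapping tiles, pair beads by comparing their nearest-neighbour distance "constellations", and take the median displacement of the matched pairs as the stitching offset:

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_by_constellation(beads_a, beads_b, tol=2.0):
    """Pair beads whose distances to nearby beads in the same tile agree."""
    da, db = cdist(beads_a, beads_a), cdist(beads_b, beads_b)
    sig_a = np.sort(da, axis=1)[:, 1:4]        # distances to 3 nearest neighbours
    sig_b = np.sort(db, axis=1)[:, 1:4]
    cost = cdist(sig_a, sig_b)                 # compare constellation signatures
    return [(i, int(np.argmin(cost[i]))) for i in range(len(beads_a))
            if cost[i].min() < tol]

def estimate_offset(beads_a, beads_b):
    pairs = match_by_constellation(beads_a, beads_b)
    disp = np.array([beads_b[j] - beads_a[i] for i, j in pairs])
    return np.median(disp, axis=0)             # robust translation estimate

# toy data: tile B is tile A shifted by (120, -4) with small localisation noise
rng = np.random.default_rng(2)
a = rng.uniform(0, 200, (12, 2))
b = a + np.array([120.0, -4.0]) + rng.normal(0, 0.3, a.shape)
print(estimate_offset(a, b))                   # approximately [120, -4]
```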
Surface plasmon microscopy with low-cost metallic nanostructures for biosensing I
NASA Astrophysics Data System (ADS)
Lindquist, Nathan; Oh, Sang-Hyun; Otto, Lauren
2012-02-01
The field of plasmonics aims to manipulate light over dimensions smaller than the optical wavelength by exploiting surface plasmon resonances in metallic films. Typically, surface plasmons are excited by illuminating metallic nanostructures. For meaningful research in this exciting area, the fabrication of high-quality nanostructures is critical, and in an undergraduate setting, low-cost methods are desirable. Careful optical characterization of the metallic nanostructures is also required. Here, we present the use of novel, inexpensive nanofabrication techniques and the development of a customized surface plasmon microscopy setup for interdisciplinary undergraduate experiments in biosensing, surface-enhanced Raman spectroscopy, and surface plasmon imaging. A Bethel undergraduate student performs the nanofabrication in collaboration with the University of Minnesota. The rewards of mentoring undergraduate students in cooperation with a large research university are numerous, exposing them to a wide variety of opportunities. This research also interacts with upper-level, open-ended laboratory projects, summer research, a semester-long senior research experience, and will enable a large range of experiments into the future.
Evaluation of metal-foil strain gages for cryogenic application in magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freynik, H.S. Jr.; Roach, D.R.; Deis, D.W.
1977-07-08
The requirement for the design and construction of large superconducting magnet systems for fusion research has raised a number of new questions regarding the properties of composite superconducting conductors. One of these, the effect of mechanical stress on the current-carrying capacity of Nb₃Sn, is of major importance in determining the feasibility of constructing large magnets with this material. A typical experiment for determining such data involves the measurement of critical current versus magnetic field while the conductor is being mechanically strained to various degrees. Techniques are well developed for the current and field measurements, but much less so for the accurate measurement of strain at liquid-helium temperature in a high magnetic field. A study was made of commercial, metal-foil strain gages for use under these conditions. The information developed can also be applied to the use of strain gages as diagnostic tools in superconducting magnets.
Scalable electrophysiology in intact small animals with nanoscale suspended electrode arrays
NASA Astrophysics Data System (ADS)
Gonzales, Daniel L.; Badhiwala, Krishna N.; Vercosa, Daniel G.; Avants, Benjamin W.; Liu, Zheng; Zhong, Weiwei; Robinson, Jacob T.
2017-07-01
Electrical measurements from large populations of animals would help reveal fundamental properties of the nervous system and neurological diseases. Small invertebrates are ideal for these large-scale studies; however, patch-clamp electrophysiology in microscopic animals typically requires invasive dissections and is low-throughput. To overcome these limitations, we present nano-SPEARs: suspended electrodes integrated into a scalable microfluidic device. Using this technology, we have made the first extracellular recordings of body-wall muscle electrophysiology inside an intact roundworm, Caenorhabditis elegans. We can also use nano-SPEARs to record from multiple animals in parallel and even from other species, such as Hydra littoralis. Furthermore, we use nano-SPEARs to establish the first electrophysiological phenotypes for C. elegans models for amyotrophic lateral sclerosis and Parkinson's disease, and show a partial rescue of the Parkinson's phenotype through drug treatment. These results demonstrate that nano-SPEARs provide the core technology for microchips that enable scalable, in vivo studies of neurobiology and neurological diseases.
Variable Stars in the Field of V729 Aql
NASA Astrophysics Data System (ADS)
Cagaš, P.
2017-04-01
Wide field instruments can be used to acquire light curves of tens or even hundreds of variable stars per night, which increases the probability of new discoveries of interesting variable stars and generally increases the efficiency of observations. At the same time, wide field instruments produce a large amount of data, which must be processed using advanced software. The traditional approach, typically used by amateur astronomers, requires an unacceptable amount of time to process each data set. New functionality, built into the SIPS software package, can shorten the time needed to obtain light curves by several orders of magnitude. Also, the newly introduced SILICUPS software is intended for post-processing of stored light curves. It can be used to visualize observations from many nights, to find variable star periods, evaluate types of variability, etc. This work provides an overview of the tools used to process data from the large field of view around the variable star V729 Aql and demonstrates the results.
Barriers and opportunities for passive removal of indoor ozone
NASA Astrophysics Data System (ADS)
Gall, Elliott T.; Corsi, Richard L.; Siegel, Jeffrey A.
2011-06-01
This paper presents a Monte Carlo simulation to assess passive removal materials (PRMs) that remove ozone with no additional energy input and minimal byproduct formation. Distributions for air exchange rate in a subset of homes in Houston, Texas, were taken from the literature and combined with background ozone removal rates in typical houses and previously determined experimental ozone deposition velocities to activated carbon cloth and gypsum wallboard PRMs. The median ratio of indoor to outdoor ozone was predicted to be 0.16 for homes with no PRMs installed and ranged from 0.047 to 0.12 for homes with PRMs. Median values of ozone removal effectiveness in these homes ranged from 22% to 68% for the conditions investigated. Achieving an ozone removal effectiveness above 50% in half of the homes would require installing a large area of PRMs and providing enhanced air speed to transport pollutants to PRM surfaces. Challenges associated with achieving this removal include optimizing indoor transport and aesthetic implications of large surface areas of PRM materials.
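A minimal sketch of the steady-state mass balance underlying such a simulation; the parameter distributions, house volume, installed PRM area, and the effectiveness definition below are illustrative stand-ins, not the Houston inputs used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
V = 400.0                                                   # house volume [m^3]          (assumed)
aer = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=N)    # air exchange rate [1/h]     (assumed)
k_bg = rng.uniform(1.0, 4.0, N)                             # background removal [1/h]    (assumed)
v_d = rng.uniform(1.0, 5.0, N)                              # PRM deposition velocity [m/h] (assumed)
A_prm = 20.0                                                # installed PRM area [m^2]    (assumed)

# steady-state indoor/outdoor ozone ratio: I/O = lambda / (lambda + k_bg + v_d * A / V)
io_no_prm = aer / (aer + k_bg)
io_prm = aer / (aer + k_bg + v_d * A_prm / V)
effectiveness = 1.0 - io_prm / io_no_prm    # one way to express PRM removal effectiveness

print(np.median(io_no_prm), np.median(io_prm), np.median(effectiveness))
```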
Current Approaches to Bone Tissue Engineering: The Interface between Biology and Engineering.
Li, Jiao Jiao; Ebied, Mohamed; Xu, Jen; Zreiqat, Hala
2018-03-01
The successful regeneration of bone tissue to replace areas of bone loss in large defects or at load-bearing sites remains a significant clinical challenge. Over the past few decades, major progress has been achieved in the field of bone tissue engineering to provide alternative therapies, particularly through approaches that are at the interface of biology and engineering. To satisfy the diverse regenerative requirements of bone tissue, the field is moving toward highly integrated approaches incorporating the knowledge and techniques from multiple disciplines, and typically involves the use of biomaterials as an essential element for supporting or inducing bone regeneration. This review summarizes the types of approaches currently used in bone tissue engineering, beginning with those primarily based on biology or engineering, and moving into integrated approaches in the areas of biomaterial developments, biomimetic design, and scalable methods for treating large or load-bearing bone defects, while highlighting potential areas for collaboration and providing an outlook on future developments. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Biodiesel sensing using silicon-on-insulator technologies
NASA Astrophysics Data System (ADS)
Casas Bedoya, Alvaro; Ling, Meng Y.; Brouckaert, Joost; Yebo, Nebiyu A.; Van Thourhout, Dries; Baets, Roel G.
2009-05-01
By measuring the transmission of Biodiesel/Diesel mixtures in the near- and far-infrared wavelength ranges, it is possible to predict the blend level with high accuracy. Conventional photospectrometers are typically large and expensive and have a performance that often exceeds the requirements for most applications. For automotive applications, for example, what counts is size, robustness and, most importantly, cost. As a result, the miniaturization of the spectrometer can be seen as an attractive implementation of a Biodiesel sensor. Using Silicon-on-Insulator (SOI) this spectrometer miniaturization can be achieved. Due to the large refractive index contrast of the SOI material system, photonic devices can be made very compact. Moreover, they can be manufactured on high-quality SOI substrates using waferscale CMOS fabrication tools, making them cheap for the market. In this paper, we show that it is possible to determine Biodiesel blend levels using an SOI spectrometer-on-a-chip. We demonstrate absorption measurements using spiral shaped waveguides and we also present the spectrometer design for on-chip Biodiesel blend level measurements.
Speeding up 3D speckle tracking using PatchMatch
NASA Astrophysics Data System (ADS)
Zontak, Maria; O'Donnell, Matthew
2016-03-01
Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
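A hedged sketch of PatchMatch-style block matching, written in 2-D for brevity whereas the paper tracks 3-D+t ultrasound volumes: random initialization of offsets, propagation from already-matched neighbours in alternating scan order, and a shrinking random search around the current best match. Kernel size, search radius, and iteration counts are illustrative choices:

```python
import numpy as np

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

def patchmatch(frame0, frame1, k=4, radius=10, iters=4, seed=0):
    rng = np.random.default_rng(seed)
    h, w = frame0.shape
    ys, xs = np.arange(k, h - k), np.arange(k, w - k)
    off = rng.integers(-radius, radius + 1, size=(h, w, 2))   # current best (dy, dx) per pixel

    def cost(y, x, dy, dx):
        yy, xx = y + dy, x + dx
        if not (k <= yy < h - k and k <= xx < w - k):
            return np.inf
        return ssd(frame0[y - k:y + k + 1, x - k:x + k + 1],
                   frame1[yy - k:yy + k + 1, xx - k:xx + k + 1])

    best = np.full((h, w), np.inf)
    for y in ys:
        for x in xs:
            best[y, x] = cost(y, x, *off[y, x])

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1                 # alternate scan direction
        for y in (ys if step == 1 else ys[::-1]):
            for x in (xs if step == 1 else xs[::-1]):
                # propagation: adopt a neighbour's offset if it matches better
                for ny, nx in ((y, x - step), (y - step, x)):
                    c = cost(y, x, *off[ny, nx])
                    if c < best[y, x]:
                        best[y, x], off[y, x] = c, off[ny, nx]
                # random search in an exponentially shrinking window
                r = radius
                while r >= 1:
                    cand = off[y, x] + rng.integers(-r, r + 1, 2)
                    c = cost(y, x, *cand)
                    if c < best[y, x]:
                        best[y, x], off[y, x] = c, cand
                    r //= 2
    return off

# toy check: frame1 is frame0 shifted by (2, 3); recovered offsets should be ~(2, 3)
f0 = np.random.default_rng(1).normal(size=(40, 40))
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))
print(np.median(patchmatch(f0, f1)[10:30, 10:30], axis=(0, 1)))
```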
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, A
2000-12-28
This is an informal report on preliminary efforts to investigate earthquake focal mechanisms and earth structure in the Anatolian (Turkish) Plateau. Seismic velocity structure of the crust and upper mantle and earthquake focal parameters for events in the Anatolian Plateau are estimated from complete regional waveforms. Focal mechanisms, depths and seismic moments of moderately large crustal events are inferred from long-period (40-100 seconds) waveforms and compared with focal parameters derived from global teleseismic data. Using shorter periods (10-100 seconds) we estimate the shear and compressional velocity structure of the crust and uppermost mantle. Results are broadly consistent with previous studies and imply relatively little crustal thickening beneath the central Anatolian Plateau. Crustal thickness is about 35 km in western Anatolia and greater than 40 km in eastern Anatolia, however the long regional paths require considerable averaging and limit resolution. Crustal velocities are lower than typical continental averages, and even lower than typical active orogens. The mantle P-wave velocity was fixed to 7.9 km/s, in accord with tomographic models. A high sub-Moho Poisson's Ratio of 0.29 was required to fit the Sn-Pn differential times. This is suggestive of high sub-Moho temperatures, high shear wave attenuation and possibly partial melt. The combination of relatively thin crust in a region of high topography and high mantle temperatures suggests that the mantle plays a substantial role in maintaining the elevation.
Finite-element modelling of multilayer X-ray optics.
Cheng, Xianchao; Zhang, Lin
2017-05-01
Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100-300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
2016-01-01
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is limited typically to a few runs a day. With a large number of experimental parameters to explore including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format where 25 or more runs can be conducted per day. The cost and effort required to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher throughput of data, which has been leveraged into grant support, has helped attract a new faculty hire, and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.
On the Minimum Core Mass for Giant Planet Formation
NASA Astrophysics Data System (ADS)
Piso, Ana-Maria; Youdin, Andrew; Murray-Clay, Ruth
2013-07-01
The core accretion model proposes that giant planets form by the accretion of gas onto a solid protoplanetary core. Previous studies have found that there exists a "critical core mass" past which hydrostatic solutions can no longer be found and unstable atmosphere collapse occurs. This core mass is typically quoted to be around 10 M⊕. In standard calculations of the critical core mass, planetesimal accretion deposits enough heat to alter the luminosity of the atmosphere, increasing the core mass required for the atmosphere to collapse. In this study we consider the limiting case in which planetesimal accretion is negligible and Kelvin-Helmholtz contraction dominates the luminosity evolution of the planet. We develop a two-layer atmosphere model with an inner convective region and an outer radiative zone that matches onto the protoplanetary disk, and we determine the minimum core mass for a giant planet to form within the typical disk lifetime for a variety of disk conditions. We denote this mass as the critical core mass. The absolute minimum core mass required to nucleate atmosphere collapse is ∼8 M⊕ at 5 AU and steadily decreases to ∼3.5 M⊕ at 100 AU, for an ideal diatomic gas with a solar composition and a standard ISM opacity law. Lower opacity and disk temperature significantly reduce the critical core mass, while a decrease in the mean molecular weight of the nebular gas results in a larger critical core mass. Our results yield lower mass cores than corresponding studies for large planetesimal accretion rates.
Finite-element modelling of multilayer X-ray optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Xianchao; Zhang, Lin
Multilayer optical elements for hard X-rays are an attractive alternative to crystals whenever high photon flux and moderate energy resolution are required. Prediction of the temperature, strain and stress distribution in the multilayer optics is essential in designing the cooling scheme and optimizing geometrical parameters for multilayer optics. The finite-element analysis (FEA) model of the multilayer optics is a well established tool for doing so. Multilayers used in X-ray optics typically consist of hundreds of periods of two types of materials. The thickness of one period is a few nanometers. Most multilayers are coated on silicon substrates of typical size 60 mm × 60 mm × 100–300 mm. The high aspect ratio between the size of the optics and the thickness of the multilayer (10^7) can lead to a huge number of elements for the finite-element model. For instance, meshing by the size of the layers will require more than 10^16 elements, which is an impossible task for present-day computers. Conversely, meshing by the size of the substrate will produce a too high element shape ratio (element geometry width/height > 10^6), which causes low solution accuracy; and the number of elements is still very large (10^6). In this work, by use of ANSYS layer-functioned elements, a thermal-structural FEA model has been implemented for multilayer X-ray optics. The possible number of layers that can be computed by presently available computers is increased considerably.
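The element-count argument in this abstract is straightforward arithmetic; the short Python sketch below reproduces the quoted orders of magnitude using assumed round numbers (a ~3 nm layer thickness, a few hundred layers, a 60 mm × 60 mm × 100 mm substrate), which are illustrative rather than the authors' exact inputs.

# Order-of-magnitude element counts for meshing a multilayer X-ray optic.
# All figures are assumptions loosely based on the abstract.
period = 3e-9            # m, thickness of one layer
n_layers = 200           # a few hundred layers in total
face = 0.06              # m, coated face edge (60 mm)
length = 0.1             # m, substrate length (100 mm)

# Aspect ratio between the optic size and one layer thickness
print(f"size/layer aspect ratio ~ {length / period:.0e}")          # ~3e7

# Meshing the coating with elements the size of one layer
n_layer_sized = (face / period) ** 2 * n_layers
print(f"layer-sized mesh ~ {n_layer_sized:.0e} elements")           # ~1e17 (> 1e16)

# Substrate-sized in-plane elements (say 3 mm) give distorted elements instead
in_plane = 3e-3
print(f"element shape ratio ~ {in_plane / period:.0e}")             # ~1e6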
Belbachir, Farid; Pettorelli, Nathalie; Wacher, Tim; Belbachir-Bazi, Amel; Durant, Sarah M
2015-01-01
Deserts are particularly vulnerable to human impacts and have already suffered a substantial loss of biodiversity. In harsh and variable desert environments, large herbivores typically occur at low densities, and their large carnivore predators occur at even lower densities. The continued survival of large carnivores is key to healthy, functioning desert ecosystems, and the ability to gather reliable information on these rare low density species, including presence, abundance and density, is critical to their monitoring and management. Here we test camera trap methodologies as a monitoring tool for an extremely rare wide-ranging large felid, the critically endangered Saharan cheetah (Acinonyx jubatus hecki). Two camera trapping surveys were carried out over 2-3 months across a 2,551 km² grid in the Ti-n-hağğen region in the Ahaggar Cultural Park, south central Algeria. A total of 32 records of Saharan cheetah were obtained. We show that the behaviour and ecology of the Saharan cheetah are severely constrained by the harsh desert environment, leading them to be more nocturnal, be more wide-ranging, and occur at lower densities relative to cheetah in savannah environments. Density estimates ranged from 0.21-0.55/1,000 km², some of the lowest large carnivore densities ever recorded in Africa, and average home range size over 2-3 months was estimated at 1,583 km². We use our results to predict that, in order to detect the presence of cheetah with p>0.95, a survey effort of at least 1,000 camera trap days is required. Our study identifies the Ahaggar Cultural Park as a key area for the conservation of the Saharan cheetah. The Saharan cheetah meets the requirements for a charismatic flagship species that can be used to "market" the Saharan landscape at a sufficiently large scale to help reverse the historical neglect of threatened Saharan ecosystems.
Atypical and Typical Antipsychotics in the Schools
ERIC Educational Resources Information Center
Noggle, Chad A.; Dean, Raymond S.
2009-01-01
The use of antipsychotic medications within the school-age population is rapidly increasing. Although typical antipsychotics may be used in rare cases, this influx is largely secondary to the availability of the atypical antipsychotics. Reduction of possible adverse effects and increased efficacy represent the primary basis for the atypical…
Polarimetric ISAR: Simulation and image reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.
In polarimetric ISAR the illumination platform, typically airborne, carries a pair of antennas that are directed toward a fixed point on the surface as the platform moves. During platform motion, the antennas maintain their gaze on the point, creating an effective aperture for imaging any targets near that point. The interaction between the transmitted fields and targets (e.g. ships) is complicated since the targets are typically many wavelengths in size. Calculation of the field scattered from the target typically requires solving Maxwell’s equations on a large three-dimensional numerical grid. This is prohibitive to use in any real-world imaging algorithm, so the scattering process is typically simplified by assuming the target consists of a cloud of independent, non-interacting, scattering points (centers). Imaging algorithms based on this scattering model perform well in many applications. Since polarimetric radar is not very common, the scattering model is often derived for a scalar field (single polarization) where the individual scatterers are assumed to be small spheres. However, when polarization is important, we must generalize the model to explicitly account for the vector nature of the electromagnetic fields and its interaction with objects. In this note, we present a scattering model that explicitly includes the vector nature of the fields but retains the assumption that the individual scatterers are small. The response of the scatterers is described by electric and magnetic dipole moments induced by the incident fields. We show that the received voltages in the antennas are linearly related to the transmitting currents through a scattering impedance matrix that depends on the overall geometry of the problem and the nature of the scatterers.
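As a rough illustration of the final point, the Python sketch below builds a point-scatterer (dipole-cloud) return in which the two received voltages are linear in the two transmit currents through an aggregate 2×2 matrix; the geometry, frequency and per-scatterer matrices are made-up assumptions, not the model parameters used in the note.

import numpy as np

# Point-scatterer sketch of a polarimetric return: each scatterer k has a
# 2x2 polarimetric scattering matrix S_k (HH, HV; VH, VV) and a range r_k,
# and the received (H, V) voltages are linear in the (H, V) transmit
# currents through an aggregate matrix Z.
c, f = 3.0e8, 10.0e9                       # m/s, Hz (assumed X-band)
k0 = 2 * np.pi * f / c                     # free-space wavenumber

rng = np.random.default_rng(0)
ranges = 1000.0 + 50.0 * rng.random(20)    # m, assumed scatterer ranges
S = 1e-3 * rng.normal(size=(20, 2, 2))     # assumed scattering matrices

# Two-way propagation phase and 1/r^2 spreading for each scatterer
prop = np.exp(-2j * k0 * ranges) / ranges**2

Z = np.einsum("k,kij->ij", prop, S)        # aggregate 2x2 response matrix

i_tx = np.array([1.0, 0.0])                # transmit on H polarization only
v_rx = Z @ i_tx                            # received H and V voltages
print(v_rx)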
NASA Astrophysics Data System (ADS)
OBrien, R. E.; Ridley, K. J.; Canagaratna, M. R.; Croteau, P.; Budisulistiorini, S. H.; Cui, T.; Green, H. S.; Surratt, J. D.; Jayne, J. T.; Kroll, J. H.
2016-12-01
A thorough understanding of the sources, evolution, and budgets of atmospheric organic aerosol requires widespread measurements of the amount and chemical composition of atmospheric organic carbon in the condensed phase (within particles and water droplets). Collecting such datasets requires substantial spatial and temporal (long term) coverage, which can be challenging when relying on online measurements by state-of-the-art research-grade instrumentation (such as those used in atmospheric chemistry field studies). Instead, samples are routinely collected using relatively low-cost techniques, such as aerosol filters, for offline analysis of their chemical composition. However, measurements made by online and offline instruments can be fundamentally different, leading to disparities between data from field studies and those from more routine monitoring. To better connect these two approaches, and take advantage of the benefits of each, we have developed a method to introduce collected samples into online aerosol instruments using nebulization. Because nebulizers typically require tens to hundreds of milliliters of solution, limiting this technique to large samples, we developed a new, ultrasonic micro-nebulizer that requires only small volumes (tens of microliters) of sample for chemical analysis. The nebulized (resuspended) sample is then sent into a high-resolution Aerosol Mass Spectrometer (AMS), a widely-used instrument that provides key information on the chemical composition of aerosol particulate matter (elemental ratios, carbon oxidation state, etc.), measurements that are not typically made for collected atmospheric samples. Here, we compare AMS data collected using standard on-line techniques with our offline analysis, demonstrating the utility of this new technique to aerosol filter samples. We then apply this approach to organic aerosol filter samples collected in remote regions, as well as rainwater samples from across the US. This data provides information on the sample composition and changes in key chemical characteristics across locations and seasons.
Graphite/Cyanate Ester Face Sheets for Adaptive Optics
NASA Technical Reports Server (NTRS)
Bennett, Harold; Shaffer, Joseph; Romeo, Robert
2008-01-01
It has been proposed that thin face sheets of wide-aperture deformable mirrors in adaptive-optics systems be made from a composite material consisting of cyanate ester filled with graphite. This composite material appears to offer an attractive alternative to low-thermal-expansion glasses that are used in some conventional optics and have been considered for adaptive-optics face sheets. Adaptive-optics face sheets are required to have maximum linear dimensions of the order of meters or even tens of meters for some astronomical applications. If the face sheets were to be made from low-thermal-expansion glasses, then they would also be required to have thicknesses of the order of a millimeter so as to obtain the optimum compromise between the stiffness needed for support and the flexibility needed to enable deformation to controlled shapes by use of actuators. It is difficult to make large glass sheets having thicknesses less than 3 mm, and 3-mm-thick glass sheets are too stiff to be deformable to the shapes typically required for correction of wavefronts of light that has traversed the terrestrial atmosphere. Moreover, the primary commercially produced candidate low-thermal-expansion glass is easily fractured when in the form of thin face sheets. Graphite-filled cyanate ester has relevant properties similar to those of the low-expansion glasses. These properties include a coefficient of thermal expansion (CTE) of the order of a hundredth of the CTEs of other typical mirror materials. The Young's modulus (which quantifies stiffness in tension and compression) of graphite-filled cyanate ester is also similar to the Young's moduli of low-thermal-expansion glasses. However, the fracture toughness of graphite-filled cyanate ester is much greater than that of the primary candidate low-thermal-expansion glass. Therefore, graphite-filled cyanate ester could be made into nearly unbreakable face sheets, having maximum linear dimensions greater than a meter and thicknesses of the order of a millimeter, that would satisfy the requirements for use in adaptive optics.
Development and Implementation of an Integrated Science Course for Elementary Education Majors
NASA Astrophysics Data System (ADS)
Gunter, Mickey E.; Gammon, Steven D.; Kearney, Robert J.; Waller, Brenda E.; Oliver, David J.
1997-02-01
Currently the scientific community is trying to increase the general population's knowledge of science. These efforts stem from the fact that the citizenry needs a better understanding of scientific knowledge to make informed decisions on many issues of current concern. The problem of scientific illiteracy begins in grade school and can be traced to inadequate exposure to science and scientific thinking during the preparation of K - 8 teachers. Typically preservice elementary teachers are required to take only one or two disconnected science courses to obtain their teaching certificates. Also, introductory science courses are often large and impersonal, with the result that while students pass the courses, they may learn very little and retain even less.
Highly stretchable polymer semiconductor films through the nanoconfinement effect
NASA Astrophysics Data System (ADS)
Xu, Jie; Wang, Sihong; Wang, Ging-Ji Nathan; Zhu, Chenxin; Luo, Shaochuan; Jin, Lihua; Gu, Xiaodan; Chen, Shucheng; Feig, Vivian R.; To, John W. F.; Rondeau-Gagné, Simon; Park, Joonsuk; Schroeder, Bob C.; Lu, Chien; Oh, Jin Young; Wang, Yanming; Kim, Yun-Hi; Yan, He; Sinclair, Robert; Zhou, Dongshan; Xue, Gi; Murmann, Boris; Linder, Christian; Cai, Wei; Tok, Jeffery B.-H.; Chung, Jong Won; Bao, Zhenan
2017-01-01
Soft and conformable wearable electronics require stretchable semiconductors, but existing ones typically sacrifice charge transport mobility to achieve stretchability. We explore a concept based on the nanoconfinement of polymers to substantially improve the stretchability of polymer semiconductors, without affecting charge transport mobility. The increased polymer chain dynamics under nanoconfinement significantly reduces the modulus of the conjugated polymer and largely delays the onset of crack formation under strain. As a result, our fabricated semiconducting film can be stretched up to 100% strain without affecting mobility, retaining values comparable to that of amorphous silicon. The fully stretchable transistors exhibit high biaxial stretchability with minimal change in on current even when poked with a sharp object. We demonstrate a skinlike finger-wearable driver for a light-emitting diode.
Seismic instrumentation of buildings
Çelebi, Mehmet
2000-01-01
The purpose of this report is to provide information on how and why we deploy seismic instruments in and around building structures. The recorded response data from buildings and other instrumented structures can be and are being primarily used to facilitate necessary studies to improve building codes and therefore reduce losses of life and property during damaging earthquakes. Other uses of such data can be in emergency response situations in large urban environments. The report discusses typical instrumentation schemes, existing instrumentation programs, the steps generally followed in instrumenting a structure, selection and type of instruments, installation and maintenance requirements and data retrieval and processing issues. In addition, a summary section on how recorded response data have been utilized is included. The benefits from instrumentation of structural systems are discussed.
Global constraints on vector-like WIMP effective interactions
Blennow, Mattias; Coloma, Pilar; Fernandez-Martinez, Enrique; ...
2016-04-07
In this work we combine information from relic abundance, direct detection, cosmic microwave background, positron fraction, gamma rays, and colliders to explore the existing constraints on couplings between Dark Matter and Standard Model constituents when no underlying model or correlation is assumed. For definiteness, we include independent vector-like effective interactions for each Standard Model fermion. Our results show that low Dark Matter masses below 20 GeV are disfavoured at the 3σ level with respect to higher masses, due to the tension between the relic abundance requirement and upper constraints on the Dark Matter couplings. Lastly, large couplings are typically only allowed in combinations which avoid effective couplings to the nuclei used in direct detection experiments.
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
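For the first of the three test cases mentioned (independent random walkers), the diffusivity has a textbook estimate from the growth of the mean squared displacement; the Python sketch below shows that baseline check. It is not the paper's fluctuation-dissipation-based estimator, and the step size and particle counts are arbitrary assumptions.

import numpy as np

# Baseline check for the simplest case in the abstract (independent random
# walkers): in one dimension the mean squared displacement grows as
# MSD(t) ~ 2*D*t, so the slope of MSD vs t gives the diffusivity.
rng = np.random.default_rng(1)
n_walkers, n_steps, dt, step = 10_000, 1_000, 1.0, 1.0

steps = rng.choice([-step, step], size=(n_walkers, n_steps))
positions = np.cumsum(steps, axis=1)

t = dt * np.arange(1, n_steps + 1)
msd = (positions**2).mean(axis=0)
D_est = np.polyfit(t, msd, 1)[0] / 2.0       # slope / 2 -> diffusivity
print(f"estimated D = {D_est:.3f} (exact {step**2 / (2 * dt):.3f})")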
Lenz's Law Demonstration Using an Ultrasound Position Sensor
NASA Astrophysics Data System (ADS)
Fodor, Petru S.; Peppard, Tara
2012-09-01
One of the very popular demonstrations used in introductory physics courses to illustrate Lenz's law is the "slowly falling magnet." In its simplest version it requires only a powerful cylindrical magnet and a metal tube, typically of copper or aluminum. When dropped in the tube the magnet takes significantly longer to reach the other end than a geometrically similar but nonmagnetic object. This demonstration has been adapted for use in large classes using a camera to monitor the magnet as it approaches the end of the tube. Small versions that can be used for hands-on experiments also have been developed or are available commercially. This classical demonstration in its various forms almost never fails to impress first-time viewers.
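A common way to quantify the effect is to model the eddy-current braking as a drag force proportional to velocity; the Python sketch below integrates that toy model with assumed (not measured) magnet and tube parameters to contrast the slow descent with free fall.

# Toy model of the "slowly falling magnet": eddy-current braking modeled as
# F = -k*v, giving terminal velocity v_t = m*g/k. All numbers are
# illustrative assumptions, not fitted to any particular magnet/tube pair.
m, g, k = 0.02, 9.81, 2.0          # kg, m/s^2, kg/s (assumed drag coefficient)
L = 0.5                            # m, tube length

dt, t, v, x = 1e-4, 0.0, 0.0, 0.0
while x < L:
    a = g - (k / m) * v
    v += a * dt
    x += v * dt
    t += dt

print(f"terminal velocity ~ {m * g / k:.3f} m/s, fall time ~ {t:.2f} s")
# A nonmagnetic object in free fall would take sqrt(2*L/g) ~ 0.32 s instead.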
Collaborative voxel-based surgical virtual environments.
Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan
2008-01-01
Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
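A minimal sketch of the general idea of shipping only the changed voxels rather than re-synchronizing the whole volume is given below; the update format (changed indices plus new values) and the spherical drill are illustrative assumptions, not the simulator's actual wire protocol.

import numpy as np

# Each user keeps a local voxel replica; a tool interaction produces a
# compact update (indices of changed voxels + new values) that peers apply.
skull = np.ones((128, 128, 128), dtype=np.uint8)   # local copy

def drill(volume, center, radius):
    """Apply a spherical drill locally and return a compact update."""
    z, y, x = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    mask = ((z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2
            <= radius**2)
    changed = mask & (volume != 0)
    volume[changed] = 0
    return np.argwhere(changed), np.zeros(int(changed.sum()), dtype=np.uint8)

indices, values = drill(skull, center=(64, 64, 64), radius=5)
print(f"update carries {indices.shape[0]} voxels "
      f"vs {skull.size} for a full-volume sync")

# A remote peer applies the same update to its replica:
remote = np.ones((128, 128, 128), dtype=np.uint8)
remote[tuple(indices.T)] = values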
Guerra Valero, Yarmarly C; Wallis, Steven C; Lipman, Jeffrey; Stove, Christophe; Roberts, Jason A; Parker, Suzanne L
2018-03-01
Conventional sampling techniques for clinical pharmacokinetic studies often require the removal of large blood volumes from patients. This can result in a physiological or emotional burden, particularly for neonates or pediatric patients. Antibiotic pharmacokinetic studies are typically performed on healthy adults or general ward patients. These may not account for alterations to a patient's pathophysiology and can lead to suboptimal treatment. Microsampling offers an important opportunity for clinical pharmacokinetic studies in vulnerable patient populations, where smaller sample volumes can be collected. This systematic review provides a description of currently available microsampling techniques and an overview of studies reporting the quantitation and validation of antibiotics using microsampling. A comparison of microsampling to conventional sampling in clinical studies is included.
High Efficiency InP Solar Cells from Low Toxicity Tertiarybutylphosphine
NASA Technical Reports Server (NTRS)
Hoffman, Richard W., Jr.; Fatemi, Navid S.; Wilt, David M.; Jenkins, Phillip P.; Brinker, David J.; Scheiman, David A.
1994-01-01
Large scale manufacture of phosphide based semiconductor devices by organo-metallic vapor phase epitaxy (OMVPE) typically requires the use of highly toxic phosphine. Advancements in phosphine substitutes have identified tertiarybutylphosphine (TBP) as an excellent precursor for OMVPE of InP. High quality undoped and doped InP films were grown using TBP and trimethylindium. Impurity doped InP films were achieved utilizing diethylzinc and silane for p- and n-type doping, respectively. 16 percent efficient solar cells under air mass zero, one sun intensity were demonstrated with Voc of 871 mV and fill factor of 82.6 percent. It was shown that TBP could replace phosphine, without adversely affecting device quality, in OMVPE deposition of InP, thus significantly reducing toxic gas exposure risk.
On the ghost-induced instability on de Sitter background
NASA Astrophysics Data System (ADS)
Peter, Patrick; Salles, Filipe de O.; Shapiro, Ilya L.
2018-03-01
It is known that the perturbative instability of tensor excitations in higher derivative gravity may not take place if the initial frequency of the gravitational waves is below the Planck threshold. One can assume that this is a natural requirement if the cosmological background is sufficiently mild, since in this case the situation is qualitatively close to the free gravitational wave in flat space. Here, we explore the opposite situation and consider the effect of a very far from Minkowski radiation-dominated or de Sitter cosmological background with a large Hubble rate, e.g., typical of an inflationary period. It turns out that, then, for initial Planckian or even trans-Planckian frequencies, the instability is rapidly suppressed by the very fast expansion of the Universe.
Evaluation of ultra-low background materials for uranium and thorium using ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoppe, E. W.; Overman, N. R.; LaFerriere, B. D.
2013-08-08
An increasing number of physics experiments require low background materials for their construction. The presence of Uranium and Thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as Neutron Activation Analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is a useful tool and recent advancements have increased the sensitivity particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique since it requires the sample to be in liquid form which is aspirated into a high temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. This paper discusses how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.
V/STOL propulsion control analysis: Phase 2, task 5-9
NASA Technical Reports Server (NTRS)
1981-01-01
Typical V/STOL propulsion control requirements were derived for transition between vertical and horizontal flight using the General Electric RALS (Remote Augmented Lift System) concept. Steady-state operating requirements were defined for a typical Vertical-to-Horizontal transition and for a typical Horizontal-to-Vertical transition. Control mode requirements were established and multi-variable regulators developed for individual operating conditions. Proportional/Integral gain schedules were developed and were incorporated into a transition controller with capabilities for mode switching and manipulated variable reassignment. A non-linear component-level transient model of the engine was developed and utilized to provide a preliminary check-out of the controller logic. An inlet and nozzle effects model was developed for subsequent incorporation into the engine model and an aircraft model was developed for preliminary flight transition simulations. A condition monitoring development plan was developed and preliminary design requirements established. The Phase 1 long-range technology plan was refined and restructured toward the development of a real-time high fidelity transient model of a supersonic V/STOL propulsion system and controller for use in a piloted simulation program at NASA-Ames.
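As a generic illustration of the gain-scheduling approach described (tabulated Proportional/Integral gains interpolated over an operating condition), the Python sketch below drives a simple first-order plant; the schedule, plant and numbers are illustrative assumptions and not the RALS controller design.

import numpy as np

# PI gains tabulated against a notional "percent transition" from vertical
# to horizontal flight, interpolated at run time.
schedule_x = np.array([0.0, 50.0, 100.0])      # % transition
kp_table = np.array([2.0, 1.2, 0.8])
ki_table = np.array([0.5, 0.3, 0.2])

def gains(transition_pct):
    return (np.interp(transition_pct, schedule_x, kp_table),
            np.interp(transition_pct, schedule_x, ki_table))

# Track a setpoint with a first-order plant dy/dt = (u - y) / tau
dt, tau, y, integ, setpoint = 0.01, 0.5, 0.0, 0.0, 1.0
for step in range(2000):
    transition = min(100.0, 0.05 * step)       # condition changes during run
    kp, ki = gains(transition)
    err = setpoint - y
    integ += err * dt
    u = kp * err + ki * integ
    y += (u - y) / tau * dt
print(f"final output {y:.3f} (setpoint {setpoint})")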
28 CFR 61.5 - Typical classes of action.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... These classes are: actions normally requiring environmental impact statements (EIS), actions normally not requiring assessments or EIS, and actions normally requiring assessments but not necessarily EIS...) Actions normally requiring EIS. None, except as noted in the appendices to this part. (2) Actions normally...
Data Intensive Analysis of Biomolecular Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straatsma, TP; Soares, Thereza A.
2007-12-01
The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines.
We are currently using an approach of developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to the entry within the trajectory, which typically will be available in multiple files, and independently from all other processors reads the appropriate frames.
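The frame-distribution idea can be made concrete in a few lines of Python; ranks are simulated in a loop here, and the round-robin assignment and frame count are illustrative assumptions rather than details of the DIANA code.

# With the whole trajectory resident in (distributed) memory, each processor
# skips directly to its own frames and reads them independently, so every
# frame is read exactly once.
n_ranks, n_frames = 4, 1000

def frames_for(rank, n_ranks, n_frames):
    """Round-robin assignment of trajectory frames to a rank."""
    return range(rank, n_frames, n_ranks)

for rank in range(n_ranks):
    mine = list(frames_for(rank, n_ranks, n_frames))
    # each rank would seek to these frame indices in its trajectory file(s)
    print(f"rank {rank}: {len(mine)} frames, first few {mine[:3]}")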
Spatial considerations during cryopreservation of a large volume sample.
Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John
2016-08-01
There have been relatively few studies on the implications of the physical conditions experienced by cells during large volume (litres) cryopreservation - most studies have focused on the problem of cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system optimal cryoprotectant concentrations and cooling rates are known. However, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag. The specific impact of this to the cryopreservation outcome is required. Under conditions of progressive solidification, the spatial location of Encapsulated Liver Spheroids had a strong impact on post-thaw recovery. Cells in areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited higher post-thaw outcome. It was also found that samples where the ice thawed more rapidly had greater post-thaw viability 24 h post-thaw (75.7 ± 3.9% and 62.0 ± 7.2% respectively). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
A MBD-seq protocol for large-scale methylome-wide studies with (very) low amounts of DNA.
Aberg, Karolina A; Chan, Robin F; Shabalin, Andrey A; Zhao, Min; Turecki, Gustavo; Staunstrup, Nicklas Heine; Starnawska, Anna; Mors, Ole; Xie, Lin Y; van den Oord, Edwin Jcg
2017-09-01
We recently showed that, after optimization, our methyl-CpG binding domain sequencing (MBD-seq) application approximates the methylome-wide coverage obtained with whole-genome bisulfite sequencing (WGB-seq), but at a cost that enables adequately powered large-scale association studies. A prior drawback of MBD-seq is the relatively large amount of genomic DNA (ideally >1 µg) required to obtain high-quality data. Biomaterials are typically expensive to collect, provide a finite amount of DNA, and may simply not yield sufficient starting material. The ability to use low amounts of DNA will increase the breadth and number of studies that can be conducted. Therefore, we further optimized the enrichment step. With this low starting material protocol, MBD-seq performed as well as, or better than, the protocol requiring ample starting material (>1 µg). Using only 15 ng of DNA as input, there is minimal loss in data quality, achieving 93% of the coverage of WGB-seq (with standard amounts of input DNA) at similar false-positive rates. Furthermore, across a large number of genomic features, the MBD-seq methylation profiles closely tracked those observed for WGB-seq with even slightly larger effect sizes. This suggests that MBD-seq provides similar information about the methylome and classifies methylation status somewhat more accurately. Performance decreases with <15 ng DNA as starting material but, even with as little as 5 ng, MBD-seq still achieves 90% of the coverage of WGB-seq with comparable genome-wide methylation profiles. Thus, the proposed protocol is an attractive option for adequately powered and cost-effective methylome-wide investigations using (very) low amounts of DNA.
Numerical Simulations of Hypersonic Boundary Layer Transition
NASA Astrophysics Data System (ADS)
Bartkowicz, Matthew David
Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small scale structures in the flow. Recently some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions which do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow for CFD solutions to be obtained on complex geometries, unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel and the transitioning boundary layer observed. A computation of an isolated roughness element is done based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region and a comparison is made to experimental data. In the CFD a fully turbulent boundary layer is observed downstream.
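The low-dissipation idea can be sketched in its simplest setting: write the numerical flux as a central part plus an explicit dissipation term and scale that term down where the solution is smooth. The Python toy below does this for linear advection; it is a hedged illustration of the general concept, not the Steger-Warming splitting or the Subbareddy-Candler scheme used in the thesis.

import numpy as np

# Linear advection u_t + a u_x = 0 with a flux written as
# central part + (scaled) upwind dissipation; a crude sensor reduces the
# dissipation in smooth regions. Toy sketch with assumed parameters.
a, nx, cfl = 1.0, 200, 0.4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / abs(a)
u = np.exp(-200.0 * (x - 0.3)**2)          # smooth initial pulse

def flux(uL, uR, alpha):
    return 0.5 * a * (uL + uR) - 0.5 * alpha * abs(a) * (uR - uL)

for _ in range(200):
    uL, uR = u, np.roll(u, -1)                        # periodic interfaces
    sensor = np.minimum(1.0, 10.0 * np.abs(uR - uL))  # crude smoothness sensor
    f = flux(uL, uR, alpha=sensor)
    u = u - dt / dx * (f - np.roll(f, 1))             # conservative update

print(f"after 200 steps: min {u.min():.3f}, max {u.max():.3f}")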
Mathematical and numerical challenges in living biological materials
NASA Astrophysics Data System (ADS)
Forest, M. Gregory; Vasquez, Paula A.
2013-10-01
The proclaimed Century of Biology is rapidly leading to the realization of how starkly different and more complex biological materials are than the materials that underpinned the industrial and technological revolution. These differences arise, in part, because biological matter exhibits both viscous and elastic behavior. Moreover, this behavior varies across the frequency, wavelength and amplitude spectrum of forcing. This broad class of responses in biological matter requires multiple frequency-dependent functions to specify material behavior, instead of a discrete set of parameters that relate to either viscosity or elasticity. This complexity prevails even if the biological matter is assumed to be spatially homogeneous, which is rarely true. However, very little progress has been made on the characterization of heterogeneity and how to build that information into constitutive laws and predictive models. In addition, most biological matter is non-stationary, which motivates the term "living". Biomaterials typically are in an active state in order to perform certain functions, and they often are modified or replenished on the basis of external stimuli. It has become popular in materials engineering to try to duplicate some of the functionality of biomaterials, e.g., a lot of effort has gone into the design of self-assembling, self-healing and shape shifting materials. These distinguishing features of biomaterials require significantly more degrees of freedom than traditional composites and many of the molecular species and their roles in functionality have yet to be determined. A typical biological material includes small molecule biochemical species that react and diffuse within larger species. These large molecular weight species provide the primary structural and biophysical properties of the material. The small molecule binding and unbinding kinetics serves to modulate material properties, and typical small molecule production and release are governed by external stimuli (e.g., stress). The bottom line is that the mathematical and numerical tools of 20th Century materials science are often insufficient for describing biological materials and for predicting their behavior both in vitro and in vivo.
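The simplest concrete example of such frequency-dependent material functions is a single-mode Maxwell element; the Python sketch below evaluates its storage and loss moduli over a range of forcing frequencies using illustrative modulus and relaxation-time values. Real biological matter needs many such modes or a continuous relaxation spectrum, which is exactly why a single viscosity or elastic constant is not enough.

import numpy as np

# Single-mode Maxwell element with modulus G and relaxation time tau:
#   G'(w)  = G * (w*tau)^2 / (1 + (w*tau)^2)   (storage, elastic part)
#   G''(w) = G *  w*tau    / (1 + (w*tau)^2)   (loss, viscous part)
G, tau = 10.0, 0.5                      # Pa, s (illustrative values)
w = np.logspace(-2, 2, 9)               # rad/s
wt = w * tau
G_storage = G * wt**2 / (1 + wt**2)
G_loss = G * wt / (1 + wt**2)
for wi, gs, gl in zip(w, G_storage, G_loss):
    print(f"w = {wi:8.3f}  G' = {gs:7.3f}  G'' = {gl:7.3f}")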
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
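A toy Python version of the indexing idea (Burrows-Wheeler transform plus an FM-index-style backward search that counts query occurrences without scanning the text) is sketched below; it builds everything naively for a tiny string, whereas Centrifuge itself uses compressed, space-optimized structures.

# Naive BWT construction and FM-index backward search on a toy "genome".
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(bwt_str, query):
    # C array: for each character, count of smaller characters in the text
    C, total = {}, 0
    for ch in sorted(set(bwt_str)):
        C[ch] = total
        total += bwt_str.count(ch)
    def occ(ch, i):                      # occurrences of ch in bwt_str[:i]
        return bwt_str[:i].count(ch)
    lo, hi = 0, len(bwt_str)             # backward search over the query
    for ch in reversed(query):
        lo = C.get(ch, 0) + occ(ch, lo)
        hi = C.get(ch, 0) + occ(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

genome = "ACGTACGTGGACGT"
print(fm_count(bwt(genome), "ACGT"))     # 3 occurrences in the toy genome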
Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance
NASA Technical Reports Server (NTRS)
Yu, JieBing; DeWitt, David J.
1996-01-01
Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3 and the original data is never deleted. Thus processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape using the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image and not the entire image.
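The trade-off behind tile size can be put in back-of-the-envelope terms: reading a small region of interest touches every tile that overlaps it, so smaller tiles read less unneeded data but pay more per-tile overhead. The Python sketch below works this out for assumed image, region and overhead sizes that are purely illustrative.

import math

# Worst-case data read from tape for a region of interest, per tile size.
image = (10_000, 10_000)        # pixels (assumed)
roi = (1_000, 1_000)            # region actually analyzed (assumed)
bytes_per_pixel = 2
per_tile_overhead = 4096        # bytes of headers/record padding (assumed)

for tile in (64, 256, 1024, 4096):
    tiles_touched = ((math.ceil(roi[0] / tile) + 1)
                     * (math.ceil(roi[1] / tile) + 1))
    data_read = tiles_touched * (tile * tile * bytes_per_pixel
                                 + per_tile_overhead)
    print(f"tile {tile:5d}: ~{tiles_touched:4d} tiles, "
          f"~{data_read / 1e6:8.1f} MB read")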
Night Waking, Sleep-Wake Organization, and Self-Soothing in the First Year of Life
GOODLIN-JONES, BETH L.; BURNHAM, MELISSA M.; GAYLOR, ERIKA E.; ANDERS, THOMAS F.
2005-01-01
Few objective data are available regarding infants’ night waking behaviors and the development of self-soothing during the first year of life. This cross-sectional study examined 80 infants in one of four age groups (3, 6, 9, or 12 mo) for four nights by using videosomnography to code nighttime awakenings and parent-child interactions. A large degree of variability was observed in parents’ putting the infant to bed awake or asleep and in responding to vocalizations after nighttime awakenings. Most infants woke during the night at all ages observed. Younger infants tended to require parental intervention at night to return to sleep, whereas older infants exhibited a greater proportion of self-soothing after nighttime awakenings. However, even in the 12-month-old group, 50% of infants typically required parental intervention to get back to sleep after waking. Results emphasize the individual and contextual factors that affect the development of self-soothing behavior during the first year of life. PMID:11530895
Isotropic transmission of magnon spin information without a magnetic field.
Haldar, Arabinda; Tian, Chang; Adeyeye, Adekunle Olusola
2017-07-01
Spin-wave devices (SWD), which use collective excitations of electronic spins as a carrier of information, are rapidly emerging as potential candidates for post-semiconductor non-charge-based technology. Isotropic in-plane propagating coherent spin waves (magnons), which require the magnetization to be out of plane, are desirable in an SWD. However, because of the lack of low-damping perpendicular magnetic materials, the well-known in-plane ferrimagnet yttrium iron garnet (YIG) is usually used with a large out-of-plane bias magnetic field, which tends to hinder the benefits of isotropic spin waves. We experimentally demonstrate an SWD that eliminates the requirement of an external magnetic field to obtain perpendicular magnetization in an otherwise in-plane ferromagnet, Ni80Fe20 or permalloy (Py), a typical choice for spin-wave microconduits. Perpendicular anisotropy in Py, as established by magnetic hysteresis measurements, was induced by the exchange-coupled Co/Pd multilayer. Isotropic propagation of magnon spin information has been experimentally shown in microconduits with three channels patterned at arbitrary angles.
A global dataset of crowdsourced land cover and land use reference data.
Fritz, Steffen; See, Linda; Perger, Christoph; McCallum, Ian; Schill, Christian; Schepaschenko, Dmitry; Duerauer, Martina; Karner, Mathias; Dresel, Christopher; Laso-Bayas, Juan-Carlos; Lesiv, Myroslava; Moorthy, Inian; Salk, Carl F; Danylo, Olha; Sturn, Tobias; Albrecht, Franziska; You, Liangzhi; Kraxner, Florian; Obersteiner, Michael
2017-06-13
Global land cover is an essential climate variable and a key biophysical driver for earth system models. While remote sensing technology, particularly satellites, has played a key role in providing land cover datasets, large discrepancies have been noted among the available products. Global land use is typically more difficult to map and in many cases cannot be remotely sensed. In-situ or ground-based data and high resolution imagery are thus an important requirement for producing accurate land cover and land use datasets and this is precisely what is lacking. Here we describe the global land cover and land use reference data derived from the Geo-Wiki crowdsourcing platform via four campaigns. These global datasets provide information on human impact, land cover disagreement, wilderness and land cover and land use. Hence, they are relevant for the scientific community that requires reference data for global satellite-derived products, as well as those interested in monitoring global terrestrial ecosystems in general.
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1973-01-01
The NASTRAN computer program is capable of executing on three different types of computers: (1) the CDC 6000 series, (2) the IBM 360-370 series, and (3) the Univac 1100 series. A typical activity requiring transfer of data between dissimilar computers is the analysis of a large structure such as the space shuttle by substructuring. Models of portions of the vehicle which have been analyzed by subcontractors using their computers must be integrated into a model of the complete structure by the prime contractor on his computer. Presently the transfer of NASTRAN matrices or tables between two different types of computers is accomplished by punched cards or a magnetic tape containing card images. These methods of data transfer do not satisfy the requirements for intercomputer data transfer associated with a substructuring activity. To provide a more satisfactory transfer of data, two new programs, RDUSER and WRTUSER, were created.
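The underlying portability problem amounts to exchanging matrices and tables in a machine-independent form rather than as native binary images; the Python sketch below shows that generic idea with a plain-text format and hypothetical file and function names, and is not the RDUSER/WRTUSER implementation.

# Write and read a dense matrix in a machine-independent plain-text form,
# so dissimilar computers can exchange it without binary-format concerns.
def write_matrix(path, matrix):
    with open(path, "w") as f:
        f.write(f"{len(matrix)} {len(matrix[0])}\n")
        for row in matrix:
            f.write(" ".join(f"{v:.17g}" for v in row) + "\n")

def read_matrix(path):
    with open(path) as f:
        rows, cols = map(int, f.readline().split())
        return [[float(v) for v in f.readline().split()] for _ in range(rows)]

write_matrix("stiffness.txt", [[1.0, -1.0], [-1.0, 2.0]])   # hypothetical file
print(read_matrix("stiffness.txt"))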
Future trends in commercial and military systems
NASA Astrophysics Data System (ADS)
Bond, F. E.
Commercial and military satellite communication systems are addressed, with a review of current applications and typical communication characteristics of the space and earth segments. Drivers for the development of future commercial systems include: the pervasion of digital techniques and services, growing orbit and frequency congestion, demand for more entertainment, and the large potential market for commercial 'roof-top' service. For military systems, survivability, improved flexibility, and the need for service to small mobile terminals are the principal factors involved. Technical trends include the use of higher frequency bands, multibeam antennas and a significant increase in the application of onboard processing. Military systems will employ a variety of techniques to counter both physical and electronic threats. The use of redundant transmission paths is a particularly effective approach. Successful implementation requires transmission standards to achieve the required interoperability among the pertinent networks. For both the military and commercial sectors, the trend toward larger numbers of terminals and more complex spacecraft is still persisting.
Efficient dielectric metasurface collimating lenses for mid-infrared quantum cascade lasers.
Arbabi, Amir; Briggs, Ryan M; Horie, Yu; Bagheri, Mahmood; Faraon, Andrei
2015-12-28
Light emitted from single-mode semiconductor lasers generally has large divergence angles, and high numerical aperture lenses are required for beam collimation. Visible and near infrared lasers are collimated using aspheric glass or plastic lenses, yet collimation of mid-infrared quantum cascade lasers typically requires more costly aspheric lenses made of germanium, chalcogenide compounds, or other infrared-transparent materials. Here we report mid-infrared dielectric metasurface flat lenses that efficiently collimate the output beam of single-mode quantum cascade lasers. The metasurface lenses are composed of amorphous silicon posts on a flat sapphire substrate and can be fabricated at low cost using a single step conventional UV binary lithography. Mid-infrared radiation from a 4.8 μm distributed-feedback quantum cascade laser is collimated using a polarization insensitive metasurface lens with 0.86 numerical aperture and 79% transmission efficiency. The collimated beam has a half divergence angle of 0.36° and beam quality factor of M² = 1.02.
An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-01-01
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
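The decision step described above (use the shape of the RSS distribution to choose between a single Gaussian and a double-peak model) can be sketched in a few lines of Python; the kurtosis threshold and the crude median split used to seed the two peaks are illustrative assumptions, not the paper's exact fitting procedure.

import numpy as np

# Kurtosis-based choice between a single-Gaussian and a double-peak model
# for the RSS samples collected at one reference point.
def excess_kurtosis(x):
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m)**4).mean() / s**4 - 3.0

def fit_rss_model(rss, threshold=-0.6):
    rss = np.asarray(rss, dtype=float)
    if excess_kurtosis(rss) > threshold:          # roughly unimodal
        return ("single", (rss.mean(), rss.std()))
    cut = np.median(rss)                          # crude split into two peaks
    lo, hi = rss[rss <= cut], rss[rss > cut]
    return ("double", ((lo.mean(), lo.std()), (hi.mean(), hi.std())))

rng = np.random.default_rng(0)
unimodal = rng.normal(-60, 2, 500)                                  # dBm
bimodal = np.concatenate([rng.normal(-70, 2, 250),
                          rng.normal(-55, 2, 250)])
print(fit_rss_model(unimodal)[0], fit_rss_model(bimodal)[0])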
Antibiotic Stewardship in Small Hospitals: Barriers and Potential Solutions.
Stenehjem, Edward; Hyun, David Y; Septimus, Ed; Yu, Kalvin C; Meyer, Marc; Raj, Deepa; Srinivasan, Arjun
2017-08-15
Antibiotic stewardship programs (ASPs) improve antibiotic prescribing. Seventy-three percent of US hospitals have <200 beds. Small hospitals (<200 beds) have similar rates of antibiotic prescribing compared to large hospitals, but the majority of small hospitals lack ASPs that satisfy the Centers for Disease Control and Prevention's core elements. All hospitals, regardless of size, are now required to have ASPs by The Joint Commission, and the Centers for Medicare and Medicaid Services has proposed a similar requirement. Very few studies have described the successful implementation of ASPs in small hospitals. We describe barriers commonly encountered in small hospitals when constructing an antibiotic stewardship team, obtaining appropriate metrics of antibiotic prescribing, implementing antibiotic stewardship interventions, obtaining financial resources, and utilizing the microbiology laboratory. We propose potential solutions that tailor stewardship activities to the needs of the facility and the resources typically available. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
Quintero, Ignacio; Wiens, John J
2013-08-01
A key question in predicting responses to anthropogenic climate change is: how quickly can species adapt to different climatic conditions? Here, we take a phylogenetic approach to this question. We use 17 time-calibrated phylogenies representing the major tetrapod clades (amphibians, birds, crocodilians, mammals, squamates, turtles) and climatic data from distributions of > 500 extant species. We estimate rates of change based on differences in climatic variables between sister species and estimated times of their splitting. We compare these rates to predicted rates of climate change from 2000 to 2100. Our results are striking: matching projected changes for 2100 would require rates of niche evolution that are > 10,000 times faster than rates typically observed among species, for most variables and clades. Despite many caveats, our results suggest that adaptation to projected changes in the next 100 years would require rates that are largely unprecedented based on observed rates among vertebrate species. © 2013 John Wiley & Sons Ltd/CNRS.
The rotating spectrometer: Biotechnology for cell separations
NASA Technical Reports Server (NTRS)
Noever, David A.
1991-01-01
An instrument for biochemical studies, called the rotating spectrometer, separates previously inseparable cell cultures. The rotating spectrometer is intended for use in pharmacological studies which require fractional splitting of heterogeneous cell cultures based on cell morphology and swimming behavior. As a method to separate and concentrate cells in free solution, the rotating method requires active organism participation and can effectively split the large class of organisms known to form spontaneous patterns. Examples include the biochemical star, an organism called Tetrahymena pyriformis. Following focusing in a rotating frame, the separation is accomplished using different radial dependencies of concentrated algal and protozoan species. The focusing itself appears as concentric rings and arises from the coupling between swimming direction and Coriolis forces. A dense cut is taken at varying radii, and extraction is replenished at an inlet. Unlike standard separation and concentrating techniques such as filtration or centrifugation, the instrument is able to separate motile from immotile fractions. For a single pass, typical split efficiencies can reach 200 to 300 percent compared to the inlet concentration.
The rotating spectrometer: New biotechnology for cell separations
NASA Technical Reports Server (NTRS)
Noever, David A.; Matsos, Helen C.
1990-01-01
An instrument for biochemical studies, called the rotating spectrometer, separates previously inseparable cell cultures. The rotating spectrometer is intended for use in pharmacological studies which require fractional splitting of heterogeneous cell cultures based on cell morphology and swimming behavior. As a method to separate and concentrate cells in free solution, the rotating method requires active organism participation and can effectively split the large class of organisms known to form spontaneous patterns. Examples include the biochemical star, an organism called Tetrahymena pyriformis. Following focusing in a rotated frame, the separation is accomplished using different radial dependencies of concentrated algal and protozoan species. The focusing itself appears as concentric rings and arises from the coupling between swimming direction and Coriolis forces. A dense cut is taken at varying radii and extraction is replenished at an inlet. Unlike standard separation and concentrating techniques such as filtration or centrifugation, the instrument is able to separate motile from immotile fractions. For a single pass, typical split efficiencies can reach 200 to 300 percent compared to the inlet concentration.
An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-08-21
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.
A quick response four decade logarithmic high-voltage stepping supply
NASA Technical Reports Server (NTRS)
Doong, H.
1978-01-01
An improved high-voltage stepping supply for space instrumentation is described, intended for applications where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, consuming an average power of 750 milliwatts, delivers a pair of mirror-image, 64-level logarithmic outputs. It covers a four decade range of ±2500 to ±0.29 volts with an output stability of ±0.5 percent or ±20 millivolts over all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, with 100 microseconds for the lower two decades. The versatile design allows the supply to serve as a quick-response staircase generator, as described, or as a fixed voltage source with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged particle detection, electro-optical systems, and other high voltage scientific instruments.
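As a quick arithmetic check on the figures quoted above (editorial illustration only), 64 logarithmically spaced levels spanning 2500 V down to 0.29 V correspond to roughly four decades, with a fixed ratio of about 1.15 between adjacent levels:

```python
import math

v_max, v_min, levels = 2500.0, 0.29, 64
decades = math.log10(v_max / v_min)                   # ~3.94, i.e. the "four decade range"
step_ratio = (v_max / v_min) ** (1.0 / (levels - 1))  # ~1.155 between adjacent levels
print(round(decades, 2), round(step_ratio, 3))
```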
A global dataset of crowdsourced land cover and land use reference data
Fritz, Steffen; See, Linda; Perger, Christoph; McCallum, Ian; Schill, Christian; Schepaschenko, Dmitry; Duerauer, Martina; Karner, Mathias; Dresel, Christopher; Laso-Bayas, Juan-Carlos; Lesiv, Myroslava; Moorthy, Inian; Salk, Carl F.; Danylo, Olha; Sturn, Tobias; Albrecht, Franziska; You, Liangzhi; Kraxner, Florian; Obersteiner, Michael
2017-01-01
Global land cover is an essential climate variable and a key biophysical driver for earth system models. While remote sensing technology, particularly satellites, has played a key role in providing land cover datasets, large discrepancies have been noted among the available products. Global land use is typically more difficult to map and in many cases cannot be remotely sensed. In-situ or ground-based data and high resolution imagery are thus an important requirement for producing accurate land cover and land use datasets, and this is precisely what is lacking. Here we describe the global land cover and land use reference data derived from the Geo-Wiki crowdsourcing platform via four campaigns. These global datasets provide information on human impact, land cover disagreement, wilderness, and land cover and land use. Hence, they are relevant for the scientific community that requires reference data for global satellite-derived products, as well as those interested in monitoring global terrestrial ecosystems in general. PMID:28608851
Isotropic transmission of magnon spin information without a magnetic field
Haldar, Arabinda; Tian, Chang; Adeyeye, Adekunle Olusola
2017-01-01
Spin-wave devices (SWD), which use collective excitations of electronic spins as a carrier of information, are rapidly emerging as potential candidates for post-semiconductor non-charge-based technology. Isotropic in-plane propagating coherent spin waves (magnons), which require magnetization to be out of plane, are desirable in an SWD. However, because of the lack of availability of low-damping perpendicular magnetic materials, the well-known in-plane ferrimagnet yttrium iron garnet (YIG) is usually used with a large out-of-plane bias magnetic field, which tends to hinder the benefits of isotropic spin waves. We experimentally demonstrate an SWD that eliminates the requirement of an external magnetic field to obtain perpendicular magnetization in an otherwise in-plane ferromagnet, Ni80Fe20 or permalloy (Py), a typical choice for spin-wave microconduits. Perpendicular anisotropy in Py, as established by magnetic hysteresis measurements, was induced by the exchange-coupled Co/Pd multilayer. Isotropic propagation of magnon spin information has been experimentally shown in microconduits with three channels patterned at arbitrary angles. PMID:28776033
Efficient dielectric metasurface collimating lenses for mid-infrared quantum cascade lasers
Arbabi, Amir; Briggs, Ryan M.; Horie, Yu; ...
2015-01-01
Light emitted from single-mode semiconductor lasers generally has large divergence angles, and high numerical aperture lenses are required for beam collimation. Visible and near infrared lasers are collimated using aspheric glass or plastic lenses, yet collimation of mid-infrared quantum cascade lasers typically requires more costly aspheric lenses made of germanium, chalcogenide compounds, or other infrared-transparent materials. We report mid-infrared dielectric metasurface flat lenses that efficiently collimate the output beam of single-mode quantum cascade lasers. The metasurface lenses are composed of amorphous silicon posts on a flat sapphire substrate and can be fabricated at low cost using a single step conventional UV binary lithography. Mid-infrared radiation from a 4.8 μm distributed-feedback quantum cascade laser is collimated using a polarization insensitive metasurface lens with 0.86 numerical aperture and 79% transmission efficiency. The collimated beam has a half divergence angle of 0.36° and beam quality factor of M² = 1.02.
Analysis of the Interactions of Planetary Waves with the Mean Flow of the Stratosphere
NASA Technical Reports Server (NTRS)
Newman, Paul A.
2007-01-01
During the winter period, large scale waves (planetary waves) are observed to propagate from the troposphere into the stratosphere. Such wave events have been recognized since the 1950s. The very largest wave events result in major stratospheric warmings. These large scale wave events have typical durations of a few days to 2 weeks. The wave events deposit easterly momentum in the stratosphere, decelerating the polar night jet and warming the polar region. In this presentation we show the typical characteristics of these events via a compositing analysis. We will show the typical periods and scales of motion and the associated decelerations and warmings. We will illustrate some of the differences between major and minor warming wave events. We will further illustrate the feedback by the mean flow on subsequent wave events.
NASA Astrophysics Data System (ADS)
Simpson, Emma; Connolly, Paul; McFiggans, Gordon
2016-04-01
Processes such as precipitation and radiation depend on the concentration and size of different hydrometeors within clouds; it is therefore important to predict them accurately in weather and climate models. A large fraction of the clouds present in our atmosphere are mixed phase, containing both liquid and ice particles. The number of drops and ice crystals present in mixed phase clouds strongly depends on the size distribution of aerosols. Cloud condensation nuclei (CCN), a subset of atmospheric aerosol particles, are required for liquid drops to form in the atmosphere. These particles are ubiquitous in the atmosphere. To nucleate ice particles in mixed phase clouds, ice nucleating particles (INP) are required. These particles are rarer than CCN. Here we investigate the case where CCN and INPs are in direct competition with each other for water vapour within a cloud. Focusing on the immersion and condensation modes of freezing (where an INP must be immersed within a liquid drop before it can freeze), we show that the presence of CCN can suppress the formation of ice. CCN are more hydrophilic than INPs and as such are better able to compete for water vapour than the typically insoluble INPs. Water is therefore more likely to condense onto a CCN than onto an INP, leaving the INP without enough condensed water on it to be able to freeze in the immersion or condensation mode. The magnitude of this suppression effect strongly depends on a currently unconstrained quantity, which we refer to here as the critical mass of condensed water required for freezing, Mwc. Mwc is the threshold amount of water that must be condensed onto an INP before it can freeze in the immersion or condensation mode. Using the detailed cloud parcel model, the Aerosol-Cloud-Precipitation-Interaction Model (ACPIM), developed at the University of Manchester, we show that if only a small amount of water is required for freezing there is little suppression effect, and if a large amount of water is required there is a large suppression effect. In this poster, possible ways to constrain Mwc are discussed, as well as conditions where the suppression effect is likely to be greatest. Key Words: Clouds, aerosol, CCN, IN, modelling
Forecasting wildland fire behavior using high-resolution large-eddy simulations
NASA Astrophysics Data System (ADS)
Munoz-Esparza, D.; Kosovic, B.; Jimenez, P. A.; Anderson, A.; DeCastro, A.; Brown, B.
2016-12-01
Wildland fires are responsible for large socio-economic impacts. Fires affect the environment, damage structures, threaten lives, cause health issues, and involve large suppression costs. These impacts can be mitigated via accurate fire spread forecast to inform the incident management team. To this end, the state of Colorado is funding the development of the Colorado Fire Prediction System (CO-FPS). The system is based on the Weather Research and Forecasting (WRF) model enhanced with a fire behavior module (WRF-Fire). Realistic representation of wildland fire behavior requires explicit representation of small scale weather phenomena to properly account for coupled atmosphere-wildfire interactions. Moreover, transport and dispersion of biomass burning emissions from wildfires is controlled by turbulent processes in the atmospheric boundary layer, which are difficult to parameterize and typically lead to large errors when simplified source estimation and injection height methods are used. Therefore, we utilize turbulence-resolving large-eddy simulations at a resolution of 111 m to forecast fire spread and smoke distribution using a coupled atmosphere-wildfire model. This presentation will describe our improvements to the level-set based fire-spread algorithm in WRF-Fire and an evaluation of the operational system using 12 wildfire events that occurred in Colorado in 2016, as well as other historical fires. In addition, the benefits of explicit representation of turbulence for smoke transport and dispersion will be demonstrated.
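To make the level-set approach concrete, the sketch below is an illustrative toy of the kind of level-set front propagation that fire-spread modules such as WRF-Fire build on; it is not the WRF-Fire implementation, and the grid size, rate of spread, and time step are invented for the example.

```python
# Toy level-set fire-front update: the front is the zero contour of phi and is
# advanced normal to itself at the local rate of spread R(x, y).
import numpy as np

def advance_front(phi, R, dx, dy, dt):
    """One forward-Euler step of phi_t + R * |grad(phi)| = 0 (first-order Godunov upwind)."""
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
    dym = (phi - np.roll(phi, 1, axis=1)) / dy
    dyp = (np.roll(phi, -1, axis=1) - phi) / dy
    # Upwind gradient magnitude for an outward-propagating front (R >= 0)
    grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2 +
                   np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
    return phi - dt * R * grad

# Example: circular ignition spreading at 0.5 m/s on a 1 km x 1 km domain.
n, L = 201, 1000.0
x = np.linspace(0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 500)**2 + (Y - 500)**2) - 20.0   # signed distance to ignition circle
R = np.full_like(phi, 0.5)
for _ in range(100):
    phi = advance_front(phi, R, dx=L/(n-1), dy=L/(n-1), dt=2.0)
burned_area = (phi < 0).sum() * (L/(n-1))**2        # area enclosed by the front [m^2]
```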
Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak
2000-01-01
The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the FEM method, which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
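Because SPMV is the kernel in question, here is a small illustrative sketch of it for a matrix stored in compressed sparse row (CSR) format; the storage format and the toy matrix are editorial choices, not details from the abstract.

```python
# y = A @ x for A stored in compressed sparse row (CSR) format.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):                          # one output entry per row
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example:  [[4, 0, 1],
#                [0, 3, 0],
#                [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
y = spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0]))  # -> [5., 3., 7.]
```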
Forecasting wildland fire behavior using high-resolution large-eddy simulations
NASA Astrophysics Data System (ADS)
Munoz-Esparza, D.; Kosovic, B.; Jimenez, P. A.; Anderson, A.; DeCastro, A.; Brown, B.
2017-12-01
Wildland fires are responsible for large socio-economic impacts. Fires affect the environment, damage structures, threaten lives, cause health issues, and involve large suppression costs. These impacts can be mitigated via accurate fire spread forecast to inform the incident management team. To this end, the state of Colorado is funding the development of the Colorado Fire Prediction System (CO-FPS). The system is based on the Weather Research and Forecasting (WRF) model enhanced with a fire behavior module (WRF-Fire). Realistic representation of wildland fire behavior requires explicit representation of small scale weather phenomena to properly account for coupled atmosphere-wildfire interactions. Moreover, transport and dispersion of biomass burning emissions from wildfires is controlled by turbulent processes in the atmospheric boundary layer, which are difficult to parameterize and typically lead to large errors when simplified source estimation and injection height methods are used. Therefore, we utilize turbulence-resolving large-eddy simulations at a resolution of 111 m to forecast fire spread and smoke distribution using a coupled atmosphere-wildfire model. This presentation will describe our improvements to the level-set based fire-spread algorithm in WRF-Fire and an evaluation of the operational system using 12 wildfire events that occurred in Colorado in 2016, as well as other historical fires. In addition, the benefits of explicit representation of turbulence for smoke transport and dispersion will be demonstrated.
Voluminous low-T granite: fluid present partial melting of the crust?
NASA Astrophysics Data System (ADS)
Hand, Martin; Barovich, Karin; Morrissey, Laura; Bockmann, Kiara; Kelsey, David; Williams, Megan
2017-04-01
Voluminous low-T granite: fluid present partial melting of the crust? Martin Hand(1), Karin Barovich(1), Laura Morrissey(1), Vicki Lau(1), Kiara Bockmann(1), David Kelsey(1), Megan Williams(1) (1) Department of Earth Sciences, University of Adelaide, Adelaide, Australia Two general schools of thought exist for the formation of granites from predominantly crustal sources. One is that large-scale anatexis occurs via fluid-absent partial melting. This essentially thermal argument is based on the reasonable premise that the lower crust is typically fluid depleted, and experimental evidence which indicates that fluid-absent partial melting can produce significant volumes of melt, creating compositionally depleted residua that many believe are recorded by granulite facies terranes. The other school of thought is that large-scale anatexis can occur via fluid-fluxed melting. This essentially compositional-based contention is also supported by experimental evidence which shows that fluid-fluxed melting is efficient, including at temperatures not much above the solidus. However, generating significant volumes of melt at low temperatures requires a large reservoir of fluid. If fluid-fluxed melting is a realistic model, the resultant granites should be comparatively low temperature relative to those derived from predominantly fluid-absent partial melting. Using a voluminous suite of aluminous granites in the Aileron Province in the North Australian Craton together with metasedimentary granulites as models for source behaviour, we evaluate fluid-absent versus fluid-present regimes for generating large volumes of crustally-derived melt. The central Aileron Province granites occupy 32,500 km2, and in places are in excess of 8 km thick. They are characterised by abundant zircon inheritance that can be matched with metasedimentary successions in the region, suggesting they were derived in large part from melting of crust similar to that presently exposed. A notable feature of many of the granites is their enriched Th concentrations compared to typical Aileron Province sub-solidus metapelitic successions. However, based on continuous transects within metasedimentary rocks from a number of different regions that record transitions from sub-solidus assemblages to supra-solidus rocks petrologically characterised by typical fluid-absent peritectic assemblages (central Aileron Province, Broken Hill Zone, Ivrea-Verbano Zone), fluid-absent partial melting does not deplete Th concentrations in the residuum with respect to their sub-solidus protoliths. If these compositional transects are used as a guide to the general behaviour of Th during fluid-absent partial melting, the voluminous Th-enriched granites in the Aileron Province are unlikely to be the products of fluid-absent partial melting. This contention is supported by phase equilibria modelling of sub-solidus metasedimentary units whose detrital zircons match in age the granite-hosted xenocrysts, which indicate that temperatures in excess of 840°C are required to generate significant volumes (i.e. ≥ 30%) of melt under fluid-absent conditions. However, zircon saturation temperatures for the granites have a weighted mean of 776 ± 4 °C (n = 220). Because the granites contain abundant inheritance, this is an upper-T limit that also suggests fluid-absent partial melting was not the primary mechanism for granite formation.
We suggest that voluminous granite formation in the Aileron Province occurred in a fluid-rich regime that was particularly effective at destabilising monazite and liberating Th into melt. Because of the propensity of monazite to destabilise in the presence of fluid, we suggest that high-grade metasedimentary terrains that are notably depleted in Th may be residuum associated with fluid-fluxed melt loss.
LabVIEW: a software system for data acquisition, data analysis, and instrument control.
Kalkman, C J
1995-01-01
Computer-based data acquisition systems play an important role in clinical monitoring and in the development of new monitoring tools. LabVIEW (National Instruments, Austin, TX) is a data acquisition and programming environment that allows flexible acquisition and processing of analog and digital data. The main feature that distinguishes LabVIEW from other data acquisition programs is its highly modular graphical programming language, "G," and a large library of mathematical and statistical functions. The advantage of graphical programming is that the code is flexible, reusable, and self-documenting. Subroutines can be saved in a library and reused without modification in other programs. This dramatically reduces development time and enables researchers to develop or modify their own programs. LabVIEW uses a large amount of processing power and computer memory, thus requiring a powerful computer. A large-screen monitor is desirable when developing larger applications. LabVIEW is excellently suited for testing new monitoring paradigms, analysis algorithms, or user interfaces. The typical LabVIEW user is the researcher who wants to develop a new monitoring technique, a set of new (derived) variables by integrating signals from several existing patient monitors, closed-loop control of a physiological variable, or a physiological simulator.
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
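As a generic illustration of the multilevel Monte Carlo idea referenced above (not the project's implementation), the estimator can be written as a telescoping sum of level differences, E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]; the simulate(level, seed) function standing in for a flow solve at a given resolution is hypothetical.

```python
# Plain multilevel Monte Carlo estimator over a coarse-to-fine hierarchy of levels.
import numpy as np

def mlmc_estimate(simulate, samples_per_level, rng=None):
    """samples_per_level[l] = number of samples N_l on level l (coarse -> fine)."""
    if rng is None:
        rng = np.random.default_rng(0)
    estimate = 0.0
    for level, n_l in enumerate(samples_per_level):
        diffs = []
        for _ in range(n_l):
            seed = int(rng.integers(1 << 31))
            fine = simulate(level, seed)
            # Same random input on the coarser level so the difference has low variance.
            coarse = simulate(level - 1, seed) if level > 0 else 0.0
            diffs.append(fine - coarse)
        estimate += np.mean(diffs)
    return estimate
```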
NASA Astrophysics Data System (ADS)
Liu, Hung-Wei
Organic electronic materials and processing techniques have attracted considerable attention for developing organic thin-film transistors (OTFTs), since they may be patterned on flexible substrates which may be bent into a variety of shapes for applications such as displays, smart cards, solar devices and sensors. Various fabrication methods for building pentacene-based OTFTs have been demonstrated. Traditional vacuum deposition and vapor deposition methods have been studied for deposition on plastic and paper, but these are unlikely to scale well to large area printing. Researchers have developed methods for processing OTFTs from solution because of the potential for low-cost and large area device manufacturing, such as through inkjet or offset printing. Most methods require the use of precursors to make pentacene soluble, and these methods have typically produced much lower carrier mobility than the best vacuum deposited devices. We have investigated devices built from solution-processed pentacene that is locally crystallized at room temperature on the polymer substrates. Pentacene crystals grown in this manner are highly localized at pre-determined sites, have good crystallinity and show good carrier mobility, making this an attractive method for large area manufacturing of semiconductor devices.
NASA Astrophysics Data System (ADS)
McLaughlin, B. D.; Pawloski, A. W.
2015-12-01
Modern development practices require the ability to quickly and easily host an application. Small projects cannot afford to maintain a large staff for infrastructure maintenance. Rapid prototyping fosters innovation. However, maintaining the integrity of data and systems demands care, particularly in a government context. The extensive data holdings that make up much of the value of NASA's EOSDIS (Earth Observing System Data and Information System) are stored in a number of locations, across a wide variety of applications, ranging from small prototypes to large computationally-intensive operational processes. However, it is increasingly difficult for an application to implement the required security controls, perform required registrations and inventory entries, ensure logging, monitoring, patching, and then ensure that all these activities continue for the life of that application, let alone five, or ten, or fifty applications. This process often takes weeks or months to complete and requires expertise in a variety of different domains such as security, systems administration, development, etc. NGAP, the Next Generation Application Platform, is tackling this problem by investigating, automating, and resolving many of the repeatable policy hurdles that a typical application must overcome. This platform provides a relatively simple and straightforward process by which applications can commit source code to a repository and then deploy that source code to a cloud-based infrastructure, all while meeting NASA's policies for security, governance, inventory, reliability, and availability. While there is still work for the application owner for any application hosting, NGAP handles a significant portion of that work. This talk will discuss areas where we have made significant progress, areas that are complex or must remain human-intensive, and areas where we are still striving to improve this application deployment and hosting pipeline.
An automated approach towards detecting complex behaviours in deep brain oscillations.
Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi
2014-03-15
Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance and to be shape- and latency-invariant, and they require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event, with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events, achieving training-set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
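A rough sketch of the detection scheme described above (a detection contour compared against an adaptive threshold and combined into a binary event signal); the smoothed-power contour and the median/MAD threshold used here are illustrative assumptions, not the authors' exact formulation.

```python
# Detection contour + adaptive threshold -> binary event indicator.
import numpy as np

def detect_events(lfp, fs, win_s=0.25, k=3.0):
    """Return a 0/1 vector marking samples where the contour exceeds the threshold."""
    win = max(1, int(win_s * fs))
    power = np.asarray(lfp, dtype=float) ** 2
    kernel = np.ones(win) / win
    contour = np.convolve(power, kernel, mode="same")   # detection contour (smoothed power)
    med = np.median(contour)
    mad = np.median(np.abs(contour - med)) + 1e-12
    threshold = med + k * mad                            # robust, data-adaptive threshold
    return (contour > threshold).astype(int)             # logical combination -> binary signal
```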
Developing Deep Learning Applications for Life Science and Pharma Industry.
Siegismund, Daniel; Tolkachev, Vasily; Heyse, Stephan; Sick, Beate; Duerr, Oliver; Steigele, Stephan
2018-06-01
Deep Learning has boosted artificial intelligence over the past 5 years and is seen now as one of the major technological innovation areas, predicted to replace many repetitive but complex human-labor tasks within the next decade. It is also expected to be 'game changing' for research activities in pharma and life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering formerly expert domains, especially in areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are typically produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning makes it possible to create competitive applications in what have so far been regarded as core domains of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples collected in pharma-driven drug programs ('big data'), (ii) deep learning algorithmic advancements, and (iii) increases in compute power. Such applications are based on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suitable frameworks for future-proof and cost-effective development. © Georg Thieme Verlag KG Stuttgart · New York.
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
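As a back-of-the-envelope illustration of how continuum-electrostatics pKa models of this type convert an electrostatic free-energy shift into a pKa shift, the sketch below uses the standard relation delta pKa = ddG / (ln(10) RT); the cysteine model pKa of about 8.3 and the example energy are assumed values for illustration, not results from the paper.

```python
# pKa shift implied by an extra electrostatic free-energy cost of deprotonation.
import math

R = 1.987e-3  # gas constant, kcal / (mol K)

def shifted_pka(pka_model, ddG_deprot_kcal, T=298.15):
    """pKa of a site whose deprotonation costs ddG more (kcal/mol) than in the model compound."""
    return pka_model + ddG_deprot_kcal / (math.log(10) * R * T)

# A cysteine whose thiolate is stabilized by ~2.7 kcal/mol (negative penalty)
# would be perturbed from ~8.3 down to ~6.3:
print(round(shifted_pka(8.3, -2.7), 2))   # ~6.32
```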
Demand Shifting with Thermal Mass in Large Commercial Buildings in a California Hot Climate Zone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Peng; Yin, Rongxin; Brown, Carrie
2009-06-01
The potential for using building thermal mass for load shifting and peak energy demand reduction has been demonstrated in a number of simulation, laboratory, and field studies. Previous Lawrence Berkeley National Laboratory research has demonstrated that the approach is very effective in cool and moderately warm climate conditions (California Climate Zones 2-4). However, this method had not been tested in hotter climate zones. This project studied the potential of pre-cooling the building early in the morning and increasing temperature setpoints during peak hours to reduce cooling-related demand in two typical office buildings in hotter California climates: one in Visalia (CEC Climate Zone 13) and the other in San Bernardino (CEC Climate Zone 10). The conclusion of the work to date is that pre-cooling in hotter climates has similar potential to that seen previously in cool and moderate climates. All other factors being equal, results to date indicate that pre-cooling increases the depth (kW) and duration (kWh) of the possible demand shed of a given building. The effectiveness of night pre-cooling in a typical office building under hot weather conditions is very limited. However, night pre-cooling is helpful for office buildings with an undersized HVAC system. Further work is required to duplicate the tests in other typical buildings and in other hot climate zones and prove that pre-cooling is truly effective.
Developing a mixture design specification for flexible base construction.
DOT National Transportation Integrated Search
2012-06-01
In the Texas Department of Transportation (TxDOT), flexible base producers typically generate large stockpiles of material exclusively for TxDOT projects. This large state-only inventory often maintained by producers, along with time requiremen...
An ISVD-based Euclidian structure from motion for smartphones
NASA Astrophysics Data System (ADS)
Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.
2014-06-01
The development of Mobile Mapping systems over the last decades allowed to quickly collect georeferenced spatial measurements by means of sensors mounted on mobile vehicles. Despite the large number of applications that can potentially take advantage of such systems, because of their cost, their use is currently typically limited to certain specialized organizations, companies, and Universities. However, the recent worldwide diffusion of powerful mobile devices typically embedded with GPS, Inertial Navigation System (INS), and imaging sensors is enabling the development of small and compact mobile mapping systems. More specifically, this paper considers the development of a 3D reconstruction system based on photogrammetry methods for smartphones (or other similar mobile devices). The limited computational resources available in such systems and the users' request for real time reconstructions impose very stringent requirements on the computational burden of the 3D reconstruction procedure. This work takes advantage of certain recently developed mathematical tools (incremental singular value decomposition) and of photogrammetry techniques (structure from motion, Tomasi-Kanade factorization) to achieve very computationally efficient Euclidian 3D reconstruction of the scene. Furthermore, thanks to the presence of instrumentation for localization embedded in the device, the obtained 3D reconstruction can be properly georeferenced.
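The batch Tomasi-Kanade factorization mentioned above can be sketched in a few lines; note that the paper's contribution is an incremental SVD variant and a subsequent Euclidean upgrade, neither of which this illustrative snippet implements.

```python
# Rank-3 Tomasi-Kanade factorization of an affine measurement matrix.
import numpy as np

def tomasi_kanade(W):
    """W: 2F x P measurement matrix (x,y image tracks of P points over F frames)."""
    t = W.mean(axis=1, keepdims=True)          # image-plane translations
    W0 = W - t                                 # registered measurement matrix
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    S3 = np.diag(np.sqrt(s[:3]))               # keep the dominant rank-3 part
    M = U[:, :3] @ S3                          # 2F x 3 camera (motion) matrix
    X = S3 @ Vt[:3, :]                         # 3 x P structure, up to an affine ambiguity
    return M, X, t
```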
Han, Songshan; Jiao, Zongxia; Yao, Jianyong; Shang, Yaoxing
2014-09-01
An electro-hydraulic load simulator (EHLS) is a typical case of a torque system subject to strong external disturbances from a hydraulic motion system. A new velocity synchronizing compensation strategy is proposed in this paper to eliminate motion disturbances, based on theoretical and experimental analysis of the structure invariance method and the traditional velocity synchronizing compensation controller (TVSM). This strategy uses only the servo-valve control signal of the motion system and the torque feedback of the torque system, which avoids the velocity and acceleration signals required by the structure invariance method and achieves more accurate velocity synchronizing compensation under large loading conditions than a TVSM. To facilitate the implementation of this strategy in engineering cases, selection rules for the compensation parameters are proposed that do not rely on accurate knowledge of the structural parameters. This paper presents comparison data for an EHLS under various typical operating conditions using three controllers, i.e., a closed-loop proportional integral derivative (PID) controller, a TVSM, and the proposed improved velocity synchronizing controller. Experiments confirm that the new strategy performs well against motion disturbances. It is more effective at improving tracking accuracy and is a more appropriate choice for engineering applications.
NASA Astrophysics Data System (ADS)
Chan, Chun-Kai; Loh, Chin-Hsiung; Wu, Tzu-Hsiu
2015-04-01
In civil engineering, health monitoring and damage detection are typically carried out using a large number of sensors. Most methods require global measurements to extract the properties of the structure. However, some sensors, such as LVDTs, cannot be used due to in situ limitations, so the global deformation remains unknown. An experiment is used to demonstrate the proposed algorithms: a one-story, 2-bay reinforced concrete frame under weak and strong seismic excitation. In this paper, signal processing techniques and nonlinear identification are applied to the measured seismic response of reinforced concrete structures subjected to different levels of earthquake excitation. Both modal-based and signal-based system identification and feature extraction techniques are used to study the nonlinear inelastic response of the RC frame, using either input and output response data or output-only measurements. The signal-based damage identification methods include enhanced time-frequency analysis of the acceleration responses and estimation of permanent deformation directly from acceleration response data. Finally, local deformation measurements from a dense optical tracker are also used to quantify the damage of the RC frame structure.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
Importance of balanced architectures in the design of high-performance imaging systems
NASA Astrophysics Data System (ADS)
Sgro, Joseph A.; Stanton, Paul C.
1999-03-01
Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high performance general-purpose microprocessor technology have produced an abundance of low cost components suitable for use in high-performance computing systems. A common pitfall in the design of high performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high performance external memory interfaces makes it practical to design high performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
Manual fire suppression methods on typical machinery space spray fires
NASA Astrophysics Data System (ADS)
Carhart, H. W.; Leonard, J. T.; Budnick, E. K.; Ouellette, R. J.; Shanley, J. H., Jr.
1990-07-01
A series of tests was conducted to evaluate the effectiveness of Aqueous Film Forming Foam (AFFF), potassium bicarbonate powder (PKP) and Halon 1211, alone and in various combinations, in extinguishing spray fires. The sprays were generated by JP-5 jet fuel issuing from an open sounding tube, an open petcock, a leaking flange, or a slit pipe, and contacting an ignition source. The results indicate that typical fuel spray fires, such as those simulated in this series, are very severe. Flame heights ranged from 6.1 m (20 ft) for the slit pipe to 15.2 m (50 ft) for the sounding tube scenario. These large flame geometries were accompanied by heat release rates of 6 MW to greater than 50 MW, and hazardous thermal radiation levels in the near field environment, up to 9.1 m (30 ft) away. Successful suppression of these fires requires both a significant reduction in flame radiation and delivery of a suppression agent to shielded areas. Of the nine suppression methods tested, the 95 gpm AFFF hand line and the hand line in conjunction with PKP were particularly effective in reducing the radiant flux.
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
NASA Technical Reports Server (NTRS)
Albright, A. E.
1984-01-01
A glycol-exuding porous leading edge ice protection system was tested in the NASA Icing Research Tunnel. Stainless steel mesh, laser drilled titanium, and composite panels were tested on two general aviation wing sections. Two different glycol-water solutions were evaluated. Minimum glycol flow rates required for anti-icing were obtained as a function of angle of attack, liquid water content, volume median drop diameter, temperature, and velocity. Ice accretions formed after five minutes of icing were shed in three minutes or less using a glycol fluid flow equal to the anti-ice flow rate. Two methods of predicting anti-ice flow rates are presented and compared with a large experimental database of anti-ice flow rates over a wide range of icing conditions. The first method, presented in the ADS-4 document, typically predicts flow rates lower than the experimental flow rates. The second method, originally published in 1983, typically predicts flow rates up to 25 percent higher than the experimental flow rates. This method proved to be more consistent between wing-panel configurations. Significant correlation coefficients between the predicted flow rates and the experimental flow rates ranged from 0.867 to 0.947.
Environmental Limits of Tall Shrubs in Alaska’s Arctic National Parks
Swanson, David K.
2015-01-01
We sampled shrub canopy volume (height times area) and environmental factors (soil wetness, soil depth of thaw, soil pH, mean July air temperature, and typical date of spring snow loss) on 471 plots across five National Park Service units in northern Alaska. Our goal was to determine the environments where tall shrubs thrive and use this information to predict the location of future shrub expansion. The study area covers over 80,000 km2 and has mostly tundra vegetation. Large canopy volumes were uncommon, with volumes over 0.5 m3/m2 present on just 8% of plots. Shrub canopy volumes were highest where mean July temperatures were above 10.5°C and on weakly acid to neutral soils (pH of 6 to 7) with deep summer thaw (>80 cm) and good drainage. On many sites, flooding helped maintain favorable soil conditions for shrub growth. Canopy volumes were highest where the typical snow loss date was near 20 May; these represent sites that are neither strongly wind-scoured in the winter nor late to melt from deep snowdrifts. Individual species varied widely in the canopy volumes they attained and their response to the environmental factors. Betula sp. shrubs were the most common and quite tolerant of soil acidity, cold July temperatures, and shallow thaw depths, but they did not form high-volume canopies under these conditions. Alnus viridis formed the largest canopies and was tolerant of soil acidity down to about pH 5, but required more summer warmth (over 12°C) than the other species. The Salix species varied widely from S. pulchra, tolerant of wet and moderately acid soils, to S. alaxensis, requiring well-drained soils with near neutral pH. Nearly half of the land area in ARCN has mean July temperatures of 10.5 to 12.5°C, where 2°C of warming would bring temperatures into the range needed for all of the potential tall shrub species to form large canopies. However, limitations in the other environmental factors would probably prevent the formation of large shrub canopies on at least half of the land area with newly favorable temperatures after 2°C of warming. PMID:26379243
Environmental Limits of Tall Shrubs in Alaska's Arctic National Parks.
Swanson, David K
2015-01-01
We sampled shrub canopy volume (height times area) and environmental factors (soil wetness, soil depth of thaw, soil pH, mean July air temperature, and typical date of spring snow loss) on 471 plots across five National Park Service units in northern Alaska. Our goal was to determine the environments where tall shrubs thrive and use this information to predict the location of future shrub expansion. The study area covers over 80,000 km2 and has mostly tundra vegetation. Large canopy volumes were uncommon, with volumes over 0.5 m3/m2 present on just 8% of plots. Shrub canopy volumes were highest where mean July temperatures were above 10.5°C and on weakly acid to neutral soils (pH of 6 to 7) with deep summer thaw (>80 cm) and good drainage. On many sites, flooding helped maintain favorable soil conditions for shrub growth. Canopy volumes were highest where the typical snow loss date was near 20 May; these represent sites that are neither strongly wind-scoured in the winter nor late to melt from deep snowdrifts. Individual species varied widely in the canopy volumes they attained and their response to the environmental factors. Betula sp. shrubs were the most common and quite tolerant of soil acidity, cold July temperatures, and shallow thaw depths, but they did not form high-volume canopies under these conditions. Alnus viridis formed the largest canopies and was tolerant of soil acidity down to about pH 5, but required more summer warmth (over 12°C) than the other species. The Salix species varied widely from S. pulchra, tolerant of wet and moderately acid soils, to S. alaxensis, requiring well-drained soils with near neutral pH. Nearly half of the land area in ARCN has mean July temperatures of 10.5 to 12.5°C, where 2°C of warming would bring temperatures into the range needed for all of the potential tall shrub species to form large canopies. However, limitations in the other environmental factors would probably prevent the formation of large shrub canopies on at least half of the land area with newly favorable temperatures after 2°C of warming.
Face-to-face interference in typical and atypical development
Riby, Deborah M; Doherty-Sneddon, Gwyneth; Whittle, Lisa
2012-01-01
Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze it interferes with task completion. In this novel study we quantify face interference for the first time in Williams syndrome (WS) and Autism Spectrum Disorder (ASD). These disorders of development impact on cognition and social attention, but how do faces interfere with cognitive processing? Individuals developing typically as well as those with ASD (n = 19) and WS (n = 16) were recorded during a question and answer session that involved mathematics questions. In phase 1 gaze behaviour was not manipulated, but in phase 2 participants were required to maintain eye contact with the experimenter at all times. Looking at faces decreased task accuracy for individuals who were developing typically. Critically, the same pattern was seen in WS and ASD, whereby task performance decreased when participants were required to hold face gaze. The results show that looking at faces interferes with task performance in all groups. This finding requires the caveat that individuals with WS and ASD found it harder than individuals who were developing typically to maintain eye contact throughout the interaction. Individuals with ASD struggled to hold eye contact at all points of the interaction while those with WS found it especially difficult when thinking. PMID:22356183
Automated Design of Restraint Layer of an Inflatable Vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary
2007-01-01
A Mathcad computer program largely automates the design and analysis of the restraint layer (the primary load-bearing layer) of an inflatable vessel that consists of one or more sections having cylindrical, toroidal, and/or spherical shape(s). A restraint layer typically comprises webbing in the form of multiple straps. The design task includes choosing indexing locations along the straps, computing the load at every location in each strap, computing the resulting stretch at each location, and computing the amount of undersizing required of each strap so that, once the vessel is inflated and the straps thus stretched, the vessel can be expected to assume the desired shape. Prior to the development of this program, the design task was performed by use of a difficult-to-use spreadsheet program that required manual addition of rows and columns depending on the numbers of strap rows and columns of a given design. In contrast, this program is completely parametric and includes logic that automatically adds or deletes rows and columns as needed. With minimal input from the user, this program automatically computes indexing locations, strap lengths, undersizing requirements, and all design data required to produce detailed drawings and assembly procedures. It also generates textual comments that help the user understand the calculations.
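As a highly simplified illustration of the kind of strap calculation such a program automates (hoop load per strap, elastic stretch, and the corresponding undersizing), the sketch below assumes a cylindrical section and linear-elastic webbing; the stiffness value, geometry, and helper name are hypothetical.

```python
# Hoop load on one strap, the resulting strain, and the undersized cut length
# needed so the strap reaches its design length once inflated and stretched.
import math

def strap_undersize(pressure, radius, strap_pitch, strap_stiffness, design_length):
    """
    pressure        inflation pressure [Pa]
    radius          cylinder radius [m]
    strap_pitch     axial spacing between hoop straps [m]
    strap_stiffness EA of the webbing [N] (load per unit strain)
    design_length   required strap circumference when inflated [m]
    """
    load = pressure * radius * strap_pitch        # hoop load carried by one strap [N]
    strain = load / strap_stiffness               # elastic stretch per unit length
    cut_length = design_length / (1.0 + strain)   # fabricate shorter by the expected stretch
    return load, strain, cut_length

# Example: 101.3 kPa, 2 m radius, 0.15 m strap pitch, EA = 1.2e6 N, 2*pi*2 m circumference.
print(strap_undersize(101325.0, 2.0, 0.15, 1.2e6, 2 * math.pi * 2.0))
```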
Collaborative Manufacturing for Small-Medium Enterprises
NASA Astrophysics Data System (ADS)
Irianto, D.
2016-02-01
Manufacturing systems involve decisions concerning production processes, capacity, planning, and control. In MTO manufacturing systems, strategic decisions concerning fulfilment of customer requirements, manufacturing cost, and due date of delivery are the most important. In order to accelerate the decision making process, research on the decision making structure used when receiving orders and sequencing activities under limited capacity is required. An effective decision making process is typically required by small-medium component and tool makers acting as supporting industries to large industries. On one side, metal small-medium enterprises are expected to produce parts, components or tools (i.e. jigs, fixtures, molds, and dies) with high precision, low cost, and exact delivery time. On the other side, a metal small-medium enterprise may have a weak bargaining position due to aspects such as low production capacity, limited budget for material procurement, and limited high precision machines and equipment. Instead of receiving orders exclusively, a small-medium enterprise can collaborate with other small-medium enterprises in order to fulfill requirements for high quality, low manufacturing cost, and just-in-time delivery. Small-medium enterprises can share their best capabilities to form effective supporting industries. An independent body such as a community service unit at a university can take on the role of collaboration manager. The Laboratory of Production Systems at Bandung Institute of Technology has implemented shared manufacturing systems for small-medium enterprise collaboration.
First test of a high voltage feedthrough for liquid Argon TPCs connected to a 300 kV power supply
NASA Astrophysics Data System (ADS)
Cantini, C.; Gendotti, A.; Molina Bueno, L.; Murphy, S.; Radics, B.; Regenfus, C.; Rigaut, Y.-A.; Rubbia, A.; Sergiampietri, F.; Viant, T.; Wu, S.
2017-03-01
Voltages above one hundred kilovolts will be required to generate the drift field of future very large liquid argon Time Projection Chambers. One of the most delicate components is the feedthrough, whose role is to safely deliver the very high voltage to the cathode through the thick insulating walls of the cryostat without compromising the purity of the argon inside. This requires a feedthrough that is typically meters long and carefully designed to be vacuum tight and to introduce little heat. Furthermore, all materials must be carefully chosen to allow operation in cryogenic conditions. In addition, electric fields in liquid argon should be kept below a threshold to reduce the risk of discharges. The combination of all of the above requirements represents a significant challenge from the design and manufacturing perspective. In this paper, we report on the successful operation of a feedthrough satisfying all of these requirements. The details of the feedthrough design and its manufacturing steps are provided. Voltages up to an unprecedented -300 kV were applied repeatedly over long periods. A source of instability was observed; it was specific to the setup configuration used for the test and not due to the feedthrough itself.
NASA Technical Reports Server (NTRS)
Hart, Angela
2006-01-01
A description of internal cargo integration is presented. The topics include: 1) Typical Cargo for Launch/Disposal; 2) Cargo Delivery Requirements; 3) Cargo Return Requirements; and 4) Vehicle On-Orbit Stay Time.
Small and Large Number Processing in Infants and Toddlers with Williams Syndrome
ERIC Educational Resources Information Center
Van Herwegen, Jo; Ansari, Daniel; Xu, Fei; Karmiloff-Smith, Annette
2008-01-01
Previous studies have suggested that typically developing 6-month-old infants are able to discriminate between small and large numerosities. However, discrimination between small numerosities in young infants is only possible when variables continuous with number (e.g. area or circumference) are confounded. In contrast, large number discrimination…
NASA Astrophysics Data System (ADS)
Doyle, Martin W.; Singh, Jai; Lave, Rebecca; Robertson, Morgan M.
2015-07-01
We use geomorphic surveys to quantify the differences between restored and nonrestored streams, and between streams restored for market purposes (compensatory mitigation) and those restored for nonmarket programs. We also analyze the social and political-economic drivers of the stream restoration and mitigation industry using analysis of policy documents and interviews with key personnel, including regulators, mitigation bankers, stream designers, and scientists. Restored streams are typically wider and geomorphically more homogeneous than nonrestored streams. Streams restored for the mitigation market are typically headwater streams that form part of a large complex of long restored main channels and many restored tributaries; streams restored for nonmarket purposes are typically shorter and consist of the main channel only. Interviews reveal that designers integrate many influences, including economic and regulatory constraints, but traditions of practice have a large influence as well. Thus, social forces shape the morphology of restored streams.
Fair, Damien A.; Bathula, Deepti; Nikolas, Molly A.; Nigg, Joel T.
2012-01-01
Research and clinical investigations in psychiatry largely rely on the de facto assumption that the diagnostic categories identified in the Diagnostic and Statistical Manual (DSM) represent homogeneous syndromes. However, the mechanistic heterogeneity that potentially underlies the existing classification scheme might limit discovery of etiology for most developmental psychiatric disorders. Another, perhaps less palpable, reality may also be interfering with progress—heterogeneity in typically developing populations. In this report we attempt to clarify neuropsychological heterogeneity in a large dataset of typically developing youth and youth with attention deficit/hyperactivity disorder (ADHD), using graph theory and community detection. We sought to determine whether data-driven neuropsychological subtypes could be discerned in children with and without the disorder. Because individual classification is the sine qua non for eventual clinical translation, we also apply support vector machine-based multivariate pattern analysis to identify how well ADHD status in individual children can be identified as defined by the community detection delineated subtypes. The analysis yielded several unique, but similar subtypes across both populations. Just as importantly, comparing typically developing children with ADHD children within each of these distinct subgroups increased diagnostic accuracy. Two important principles were identified that have the potential to advance our understanding of typical development and developmental neuropsychiatric disorders. The first tenet suggests that typically developing children can be classified into distinct neuropsychological subgroups with high precision. The second tenet proposes that some of the heterogeneity in individuals with ADHD might be “nested” in this normal variation. PMID:22474392
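As an illustration of the analysis pipeline described above (community detection on a participant-similarity graph followed by subgroup-wise SVM classification), here is a small Python sketch using networkx and scikit-learn. The data, similarity threshold, and group sizes are invented for demonstration and do not reproduce the study's measures or results.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows = children, columns = neuropsychological
# scores; labels = 1 for ADHD, 0 for typically developing.
scores = rng.normal(size=(120, 8))
labels = rng.integers(0, 2, size=120)

# Build a participant-similarity graph (correlation between score profiles),
# keeping only the stronger edges so community structure can emerge.
corr = np.corrcoef(scores)
G = nx.Graph()
n = len(scores)
for i in range(n):
    for j in range(i + 1, n):
        if corr[i, j] > 0.3:  # arbitrary illustrative threshold
            G.add_edge(i, j, weight=corr[i, j])

# Community detection yields data-driven neuropsychological subgroups.
communities = list(greedy_modularity_communities(G, weight="weight"))

# Within each subgroup, ask how well an SVM separates ADHD from typical children.
for k, members in enumerate(communities):
    idx = np.array(sorted(members))
    class_counts = np.bincount(labels[idx], minlength=2)
    if len(idx) < 20 or class_counts.min() < 5:
        continue  # skip subgroups too small for 5-fold cross-validation
    acc = cross_val_score(SVC(kernel="linear"), scores[idx], labels[idx], cv=5).mean()
    print(f"subgroup {k}: n={len(idx)}, cross-validated accuracy={acc:.2f}")
```

With random data the accuracies hover near chance; the point of the sketch is only to show how subgrouping and within-subgroup classification fit together.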
NASA Astrophysics Data System (ADS)
Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.
2015-12-01
Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, processing them can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when a rapid response is needed in crisis management and disaster planning scenarios resulting from the release of toxic material into the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from it by percentile. Using a 30-node SOM, with sampling at every 20th percentile, we have been able to reduce 30 years of Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare the area and overlap of statistical exceedance between the reduced data set and the full 30-year CFSR dataset. Using the MOE, we find that our SOM-derived climate subset produces statistics that fall within 85-90% overlap of the full set while using only 15% of the total data length, and consequently, 15% of the computational time required to run the T&D model for the full period.
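One plausible reading of the day-selection step is sketched below in Python: each day is assigned to its nearest (pre-trained) SOM node, days within a node are ranked by distance, and the days at every 20th percentile are retained, which for a 30-node map yields on the order of 150 "typical days." The array names, shapes, and percentile convention are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Hypothetical inputs: one feature vector per day (e.g., surface winds and
# boundary layer depth over the domain) and 30 trained SOM node weight vectors
# (the training itself could be done with any SOM implementation).
rng = np.random.default_rng(0)
days = rng.normal(size=(900, 40))
nodes = rng.normal(size=(30, 40))

# Assign each day to its closest node and record the distance to that node.
dists = np.linalg.norm(days[:, None, :] - nodes[None, :, :], axis=2)
best_node = dists.argmin(axis=1)
best_dist = dists.min(axis=1)

# Within each node, rank the assigned days by distance and keep the days at
# every 20th percentile of that ranking, yielding a few "typical days" per node.
typical_days = []
for k in range(len(nodes)):
    members = np.where(best_node == k)[0]
    if members.size == 0:
        continue
    ranked = members[np.argsort(best_dist[members])]
    picks = [ranked[int(p / 100 * (len(ranked) - 1))] for p in (0, 20, 40, 60, 80)]
    typical_days.extend(dict.fromkeys(picks))  # drop duplicates in sparse nodes

print(f"selected {len(set(typical_days))} typical days out of {len(days)}")
```

With roughly five percentile picks per node and 30 nodes, the subset size lands near the 150 days reported in the abstract; the reduced set can then drive the T&D model ensemble at a fraction of the cost of the full reanalysis.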