Sample records for uh big nuclear

  1. UH-60M Black Hawk Helicopter (UH-60M Black Hawk)

    DTIC Science & Technology

    2016-12-01

    Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-341 UH-60M Black Hawk Helicopter (UH-60M Black Hawk) As of FY 2017 President’s Budget Defense...Acquisition Management Information Retrieval (DAMIR) March 21, 2016 18:25:45 UNCLASSIFIED UH-60M Black Hawk December 2015 SAR March 21, 2016 18...Operational Requirements Document OSD - Office of the Secretary of Defense O&S - Operating and Support PAUC - Program Acquisition Unit Cost UH-60M Black Hawk

  2. Prolonged restricted sitting effects in UH-60 helicopters.

    PubMed

    Games, Kenneth E; Lakin, Joni M; Quindry, John C; Weimar, Wendi H; Sefton, JoEllen M

    2015-01-01

    Advances in flight technologies and the demand for long-range flight have increased mission lengths for U.S. Army Black Hawk UH-60 crewmembers. Prolonged mission times have increased reports of pilot discomfort and symptoms of paresthesia thought to be due to UH-60 seat design and areas of locally high pressure. Discomfort created by the seat-system decreases situational awareness, putting aviators and support crew at risk of injury. Therefore, the purpose of this study was to examine the effects of prolonged restricted sitting in a UH-60 on discomfort, sensory function, and vascular measures in the lower extremities. There were 15 healthy men (age = 23.4 ± 3.1 yr) meeting physical flight status requirements who sat in an unpadded, UH-60 pilot's seat for 4 h while completing a common cognitive task. During the session, subjective discomfort, sensory function, and vascular function were measured. Across 4 h of restricted sitting, subjective discomfort increased using the Category Partitioning Scale (30.27 point increase) and McGill Pain Questionnaire (8.53 point increase); lower extremity sensory function was diminished along the S1 dermatome; and skin temperature decreased on both the lateral (2.85°C decrease) and anterior (2.78°C decrease) aspects of the ankle. The results suggest that prolonged sitting in a UH-60 seat increases discomfort, potentially through a peripheral nervous or vascular system mechanism. Further research is needed to understand the etiology and onset of pain and paresthesia during prolonged sitting in UH-60 pilot seats. Games KE, Lakin JM, Quindry JC, Weimar WH, Sefton JM. Prolonged restricted sitting effects in UH-60 helicopters.

  3. The isotype repertoire of antibodies against novel UH-RA peptides in rheumatoid arthritis.

    PubMed

    De Winter, Liesbeth M; Geusens, Piet; Lenaerts, Jan; Vanhoof, Johan; Stinissen, Piet; Somers, Veerle

    2016-06-07

    Recently, autoantibodies against novel UH-RA peptides (UH-RA.1 and UH-RA.21) were identified as candidate biomarkers for patients with rheumatoid arthritis (RA) who are seronegative for the current diagnostic markers rheumatoid factor and anticitrullinated protein antibodies. Previously, screening for anti-UH-RA autoantibodies was based on measuring the immunoglobulin (Ig) G response. We aimed to investigate whether measurement of other isotypes could improve the performance of diagnostic testing. In addition, assigning the isotype profile might provide valuable information on effector functions of the antibodies. The isotype profile of antibodies against UH-RA.1 and UH-RA.21 was studied. The IgG, IgM, and IgA classes, together with the 4 different IgG subclasses, were determined in 285 patients with RA, 88 rheumatic control subjects, and 90 healthy control subjects. Anti-UH-RA.1 antibodies were primarily of the IgM isotype and twice as prevalent as IgG (IgG3-dominated) and IgA. RA sensitivity when testing for anti-UH-RA.1 IgM was shown to be higher than when testing for the IgG isotype: 18 % versus 9 % sensitivity when RA specificity was set to 90 %. Within antibodies against UH-RA.21, IgG and IgA were more common than IgM. Different anti-UH-RA.21 IgG subclasses were found, with the highest prevalence found for IgG2. Combined testing for IgG and IgA slightly increased RA sensitivity of UH-RA.21-specific antibody testing to 27 % compared with solely testing for IgG (23 %). Notably, a higher number of anti-UH-RA.21 antibody isotypes was related to increased levels of erythrocyte sedimentation rate. Finally, for both antibody responses, the full antibody isotype use was demonstrated in early and seronegative disease. The isotype distribution of anti-UH-RA.1 and anti-UH-RA.21 antibodies was successfully outlined, and, for antibodies against UH-RA.1, we found that isotype-specific testing might have implications for diagnostic testing. The exact mechanisms by

  4. 78 FR 58570 - Environmental Assessment; Entergy Nuclear Operations, Inc., Big Rock Point

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ... Assessment; Entergy Nuclear Operations, Inc., Big Rock Point AGENCY: Nuclear Regulatory Commission. ACTION... applicant or the licensee), for the Big Rock Point (BRP) Independent Spent Fuel Storage Installation (ISFSI... Rock Point (BRP) Independent Spent Fuel Storage Installation (ISFSI). II. Environmental Assessment (EA...

  5. Uh and um revisited: are they interjections for signaling delay?

    PubMed

    O'Connell, Daniel C; Kowal, Sabine

    2005-11-01

    Clark and Fox Tree (2002) have presented empirical evidence, based primarily on the London-Lund corpus (LL; Svartvik & Quirk, 1980), that the fillers uh and um are conventional English words that signal a speaker's intention to initiate a minor and a major delay, respectively. We present here empirical analyses of uh and um and of silent pauses (delays) immediately following them in six media interviews of Hillary Clinton. Our evidence indicates that uh and um cannot serve as signals of upcoming delay, let alone signal it differentially: In most cases, both uh and um were not followed by a silent pause, that is, there was no delay at all; the silent pauses that did occur after um were too short to be counted as major delays; finally, the distributions of durations of silent pauses after uh and um were almost entirely overlapping and could therefore not have served as reliable predictors for a listener. The discrepancies between Clark and Fox Tree's findings and ours are largely a consequence of the fact that their LL analyses reflect the perceptions of professional coders, whereas our data were analyzed by means of acoustic measurements with the PRAAT software (www.praat.org). A comparison of our findings with those of O'Connell, Kowal, and Ageneau (2005) did not corroborate the hypothesis of Clark and Fox Tree that uh and um are interjections: Fillers occurred typically in initial, interjections in medial positions; fillers did not constitute an integral turn by themselves, whereas interjections did; fillers never initiated cited speech, whereas interjections did; and fillers did not signal emotion, whereas interjections did. Clark and Fox Tree's analyses were embedded within a theory of ideal delivery that we find inappropriate for the explication of these phenomena.
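
    The pause-measurement logic behind this comparison is straightforward to illustrate. Below is a minimal sketch, not the authors' code: it assumes time-aligned word annotations of the kind acoustic measurement with PRAAT would yield, and the annotation tuples and timings are invented for illustration.

    ```python
    from statistics import median

    # Compare silent-pause durations following "uh" vs. "um", given
    # time-aligned word annotations (word, onset_s, offset_s). The
    # annotations below are hypothetical stand-ins for measured data.
    words = [
        ("well", 0.00, 0.30), ("um", 0.35, 0.55), ("I", 0.58, 0.65),
        ("think", 0.65, 0.95), ("uh", 1.40, 1.55), ("maybe", 1.56, 1.90),
    ]

    def pauses_after(annotations, filler):
        """Silent-pause durations (s) between a filler's offset and the next word's onset."""
        return [
            max(0.0, nxt_on - off)
            for (w, _, off), (_, nxt_on, _) in zip(annotations, annotations[1:])
            if w.lower() == filler
        ]

    for f in ("uh", "um"):
        p = pauses_after(words, f)
        print(f, "pauses:", p, "median:", median(p) if p else None)
    ```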

  6. NASA/UH signing of memorandum of understanding

    NASA Image and Video Library

    1996-10-02

    NASA/University of Houston (UH) signing of memorandum of understanding. Johnson Space Center (JSC) Director George Abbey signs a memorandum of understanding with University of Houston President Glenn Goerke and University of Houston Clear Lake President William Staples. UH will supply post-doctoral researchers to JSC for more than 15 projects of scientific interest to both JSC and the university. Seated from left are Abbey, Goerke and Staples. Standing from left are David Criswell, director of the Institute of Space Systems Operations; Texas State Representatives Michael Jackson, Robert Talton and Talmadge Heflin. View appears in Space News Roundup v35 n41 p4, 10-18-96.

  7. UH-60 Airloads Program Tutorial

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2009-01-01

    From the fall of 1993 to late winter of 1994, NASA Ames and the U.S. Army flew a flight test program using a UH-60A helicopter with extensive instrumentation on the rotor and blades, including 242 pressure transducers. Over this period, approximately 30 flights were made, and data were obtained in level flight, maneuver, ascents, and descents. Coordinated acoustic measurements were obtained with a ground-acoustic array in cooperation with NASA Langley, and in-flight acoustic measurements with a YO-3A aircraft. NASA has sponsored the creation of a "tutorial" which covers the depth and breadth of the flight test program with a mixture of text and graphics. The primary purpose of this tutorial is to introduce the student to what is known about rotor aerodynamics based on the UH-60A measurements. The tutorial will also be useful to anyone interested in helicopters who would like to have more detailed knowledge about helicopter aerodynamics.

  8. "Uh" and "Um" Revisited: Are They Interjections for Signaling Delay?

    ERIC Educational Resources Information Center

    O'Connell, Daniel C.; Kowal, Sabine

    2005-01-01

    Clark and Fox Tree (2002) have presented empirical evidence, based primarily on the London-Lund corpus (LL; Svartvik & Quirk, 1980), that the fillers "uh" and "um" are conventional English words that signal a speaker's intention to initiate a minor and a major delay, respectively. We present here empirical analyses of "uh" and "um" and of silent…

  9. 78 FR 61401 - Entergy Nuclear Operations, Inc.; Big Rock Point; Independent Spent Fuel Storage Installation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... NUCLEAR REGULATORY COMMISSION [Docket Nos. 50-155; 72-43 and NRC-2013-0218] Entergy Nuclear Operations, Inc.; Big Rock Point; Independent Spent Fuel Storage Installation AGENCY: Nuclear Regulatory... the Big Rock Point (BRP) Independent Spent Fuel Storage Installation (ISFSI). ADDRESSES: Please refer...

  10. Measurement of the UH-60A Hub Large Rotor Test Apparatus Control System Stiffness

    NASA Technical Reports Server (NTRS)

    Kufeld, Robert M.

    2014-01-01

    The purpose of this report is to provide details of the measurement of the control system stiffness of the UH-60A rotor hub mounted on the Large Rotor Test Apparatus (UH-60A/LRTA). The UH-60A/LRTA was used in the 40- by 80-Foot Wind Tunnel to complete the full-scale wind tunnel test portion of the NASA / ARMY UH-60A Airloads Program. This report describes the LRTA control system and highlights the differences between the LRTA and UH-60A aircraft. The test hardware, test setup, and test procedures are also described. Sample results are shown, including the azimuthal variation of the measured control system stiffness for three different loadings and two different dynamic actuator settings. Finally, the azimuthal stiffness is converted to fixed system values using multi-blade transformations for input to comprehensive rotorcraft prediction codes.
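
    The conversion mentioned in the last sentence is the standard multi-blade (Coleman) transformation. As an illustration only, the sketch below maps hypothetical per-blade values into fixed-system collective and cyclic components; the numbers are placeholders, not measured stiffnesses from the report.

    ```python
    import numpy as np

    # Multi-blade coordinate transformation: per-blade quantities q_m sampled
    # at blade azimuths psi_m are combined into fixed-system components.
    N = 4                                    # blades on a UH-60A rotor
    psi = 2.0 * np.pi * np.arange(N) / N     # blade azimuth angles, rad
    q = np.array([1.00, 1.10, 0.95, 1.05])   # hypothetical per-blade values

    q0 = q.mean()                            # collective component
    q1c = 2.0 / N * np.sum(q * np.cos(psi))  # cosine cyclic component
    q1s = 2.0 / N * np.sum(q * np.sin(psi))  # sine cyclic component
    print(f"collective {q0:.3f}, cyclic ({q1c:.3f}, {q1s:.3f})")
    ```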

  11. Nuclear Receptors, RXR, and the Big Bang.

    PubMed

    Evans, Ronald M; Mangelsdorf, David J

    2014-03-27

    Isolation of genes encoding the receptors for steroids, retinoids, vitamin D, and thyroid hormone and their structural and functional analysis revealed an evolutionarily conserved template for nuclear hormone receptors. This discovery sparked identification of numerous genes encoding related proteins, termed orphan receptors. Characterization of these orphan receptors and, in particular, of the retinoid X receptor (RXR) positioned nuclear receptors at the epicenter of the "Big Bang" of molecular endocrinology. This Review provides a personal perspective on nuclear receptors and explores their integrated and coordinated signaling networks that are essential for multicellular life, highlighting the RXR heterodimer and its associated ligands and transcriptional mechanism. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Neural Network Modeling of UH-60A Pilot Vibration

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi

    2003-01-01

    Full-scale flight-test pilot floor vibration is modeled using neural networks and full-scale wind tunnel test data for low speed level flight conditions. Neural network connections between the wind tunnel test data and the three flight test pilot vibration components (vertical, lateral, and longitudinal) are studied. Two full-scale UH-60A Black Hawk databases are used. The first database is the NASA/Army UH-60A Airloads Program flight test database. The second database is the UH-60A rotor-only wind tunnel database that was acquired in the NASA Ames 80- by 120-Foot Wind Tunnel with the Large Rotor Test Apparatus (LRTA). Using neural networks, the flight-test pilot vibration is modeled using the wind tunnel rotating system hub accelerations, and separately, using the hub loads. The results show that the wind tunnel rotating system hub accelerations and the operating parameters can represent the flight test pilot vibration. The six components of the wind tunnel N/rev balance-system hub loads and the operating parameters can also represent the flight test pilot vibration. The present neural network connections can significantly increase the value of wind tunnel testing.
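
    The neural network connection described above is, in form, a multi-output regression from hub accelerations plus operating parameters to the three pilot vibration components. A minimal sketch follows, using scikit-learn's MLPRegressor on random placeholder arrays; the real inputs would come from the flight test and wind tunnel databases named in the abstract.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))  # e.g., 6 hub-acceleration features + advance ratio + gross weight
    Y = rng.normal(size=(200, 3))  # vertical, lateral, longitudinal vibration (g)

    # Small feedforward network mapping inputs to all three components at once.
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    model.fit(X[:150], Y[:150])                    # train on part of the data
    print("held-out R^2:", model.score(X[150:], Y[150:]))
    ```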

  13. UH-1Y - Benefits and Deficiencies

    DTIC Science & Technology

    2009-02-20

    Report, NA 01 HCG-1 (hereinafter Test and Evaluation Report). 2 author's experience. 3 Test and Evaluation Report. 4 Test and Evaluation Report...Critical Intelligence. Aug 14, 24(33). - - - 2008. DOD Approves Full Production for UH-1Y Despite Major Deficiency. Oct 2, 24(40). NA 01-11-HCG-2-1...2008. Operational Test and Evaluation Report, NA 01 HCG-1. Parmalee, Patricia, ed. 2005. Test Time. Aviation Week & Space Technology. Jun 20, 162

  14. Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza

    2003-01-01

    Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200 point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200 point database. The present study is conducted in two parts. The first part involves level flight conditions and the second part involves the entire (200 point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.

  15. Investigation of Rotor Performance and Loads of a UH-60A Individual Blade Control System

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Romander, Ethan A.; Norman, Thomas R.

    2010-01-01

    A full-scale wind tunnel test was recently conducted (March 2009) in the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel to evaluate the potential of an individual blade control (IBC) system to improve rotor performance and reduce vibrations, loads, and noise for a UH-60A rotor system [1]. This test was the culmination of a long-term collaborative effort between NASA, U.S. Army, Sikorsky Aircraft Corporation, and ZF Luftfahrttechnik GmbH (ZFL) to demonstrate the benefits of IBC for a UH-60A rotor. Figure 1 shows the UH-60A rotor and IBC system mounted on the NFAC Large Rotor Test Apparatus (LRTA). The IBC concept used in the current study utilizes actuators placed in the rotating frame, one per blade. In particular, the pitch link of the rotor blade was replaced with an actuator, so that the blade root pitch can be changed independently. This concept, designed for a full-scale UH-60A rotor, was previously tested in the NFAC 80- by 120-Foot Wind Tunnel in September 2001 at speeds up to 85 knots [2]. For the current test, the same UH-60A rotor and IBC system were tested in the 40- by 80-Foot Wind Tunnel at speeds up to 170 knots. Figure 2 shows the servo-hydraulic IBC actuator installed between the swashplate and the blade pitch horn. Although previous wind tunnel experiments [3, 4] and analytical studies on IBC [5, 6] have shown the promise to improve the rotor's performance, in-depth correlation studies have not been performed. Thus, the current test provides a unique resource that can be used to assess the accuracy and reliability of prediction methods and refine theoretical models, with the ultimate goal of providing the technology for timely and cost-effective design and development of new rotors. In this paper, rotor performance and loads calculations are carried out using the analyses CAMRAD II and coupled OVERFLOW-2/CAMRAD II and the results are compared with these UH-60A/IBC wind tunnel test data.

  16. Military Potential Test of the UH-2A Helicopter.

    DTIC Science & Technology

    1963-10-25

    required to fully service two engines during engine change. 3. One quart of hydraulic fluid, MIL 5606. Used to replace spillage while disconnecting...Maryland, dated 24 January 1963. 7. Report Nr. 1, Final Report, Climatic Laboratory Environmental Test of the Model UH-2A Helicopter, by US

  17. SCAT Classifications of 5 Supernovae with the UH88/SNIFS

    NASA Astrophysics Data System (ADS)

    Tucker, Michael A.; Huber, Mark; Shappee, Benjamin J.; Dong, Subo; Bose, S.; Chen, Ping

    2018-03-01

    We present the first classifications from the newly formed Spectral Classification of Astronomical Transients (SCAT) survey. SCAT is a transient identification survey utilizing the SuperNova Integral Field Spectrograph (SNIFS) on the University of Hawaii (UH) 88-inch telescope.

  18. Simulated Guide Stars: Adapting the Robo-AO Telescope Simulator to UH 88”

    NASA Astrophysics Data System (ADS)

    Ashcraft, Jaren; Baranec, Christoph

    2018-01-01

    Robo-AO is an autonomous adaptive optics system that is in development for the UH 88” Telescope on the Mauna Kea Observatory. This system is capable of achieving near diffraction limited imaging for astronomical telescopes, and has seen successful deployment and use at the Palomar and Kitt Peak Observatories previously. A key component of this system, the telescope simulator, will be adapted from the Palomar Observatory design to fit the UH 88” Telescope. The telescope simulator will simulate the exit pupil of the UH 88” telescope so that the greater Robo-AO system can be calibrated before observing runs. The system was designed in Code V, and then further improved upon in Zemax for later development. Alternate design forms were explored for the potential of adapting the telescope simulator to the NASA Infrared Telescope Facility, where simulating the exit pupil of the telescope proved to be more problematic. A proposed design composed of solely catalog optics was successfully produced for both telescopes, and they await assembly as time comes to construct the new Robo-AO system.

  19. Experimental observations of nonlinearly enhanced 2omega-UH electromagnetic radiation excited by steady-state colliding electron beams

    NASA Technical Reports Server (NTRS)

    Intrator, T.; Hershkowitz, N.; Chan, C.

    1984-01-01

    Counterstreaming large-diameter electron beams in a steady-state laboratory experiment are observed to generate transverse radiation at twice the upper-hybrid frequency (2omega-UH) with a quadrupole radiation pattern. The electromagnetic wave power density is nonlinearly enhanced over the power density obtained from a single beam-plasma system. Electromagnetic power density scales exponentially with beam energy and increases with ion mass. Weak turbulence theory can predict similar (but weaker) beam energy scaling but not the high power density, or the predominance of the 2omega-UH radiation peak over the omega-UH peak. Significant noise near the upper-hybrid and ion plasma frequencies is also measured, with normalized electrostatic wave energy density W(ES)/n(e)T(e) approximately 0.01.
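
    The frequencies in this abstract follow from the standard upper-hybrid relation f_UH = sqrt(f_pe^2 + f_ce^2). The sketch below evaluates it with the usual rule-of-thumb expressions for the electron plasma and cyclotron frequencies; the density and magnetic field are hypothetical values, not the experiment's parameters.

    ```python
    import math

    def f_pe(n_e_cm3):
        """Electron plasma frequency, Hz, with n_e in cm^-3 (f_pe ~ 8980*sqrt(n_e))."""
        return 8980.0 * math.sqrt(n_e_cm3)

    def f_ce(B_gauss):
        """Electron cyclotron frequency, Hz, with B in gauss (f_ce ~ 2.8e6*B)."""
        return 2.8e6 * B_gauss

    def f_uh(n_e_cm3, B_gauss):
        """Upper-hybrid frequency: f_UH = sqrt(f_pe^2 + f_ce^2)."""
        return math.hypot(f_pe(n_e_cm3), f_ce(B_gauss))

    n_e, B = 1.0e9, 10.0   # hypothetical plasma density (cm^-3) and field (G)
    print(f"f_UH ~ {f_uh(n_e, B)/1e6:.0f} MHz; 2*f_UH ~ {2*f_uh(n_e, B)/1e6:.0f} MHz")
    ```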

  20. UNBROKEN: UH-1N AIRCREW CONTINUE OPS DESPITE WEAK HEARING PROTECTION

    DTIC Science & Technology

    2016-02-29

    AU/ACSC/2016 AIR COMMAND AND STAFF COLLEGE AIR UNIVERSITY UNBROKEN: UH-1N AIRCREW CONTINUE OPS DESPITE WEAK HEARING... student training. The final one at Fairchild AFB, Washington, is focused on search and rescue operations and supports the Survival, Evasion

  21. Modeling of UH-60A Hub Accelerations with Neural Networks

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi

    2002-01-01

    Neural network relationships between the full-scale, flight test hub accelerations and the corresponding three N/rev pilot floor vibration components (vertical, lateral, and longitudinal) are studied. The present quantitative effort on the UH-60A Black Hawk hub accelerations considers the lateral and longitudinal vibrations. An earlier study had considered the vertical vibration. The NASA/Army UH-60A Airloads Program flight test database is used. A physics based "maneuver-effect-factor (MEF)", derived using the roll-angle and the pitch-rate, is used. Fundamentally, the lateral vibration data show high vibration levels (up to 0.3 g's) at low airspeeds (for example, during landing flares) and at high airspeeds (for example, during turns). The results show that the advance ratio and the gross weight together can predict the vertical and the longitudinal vibration. However, the advance ratio and the gross weight together cannot predict the lateral vibration. The hub accelerations and the advance ratio can be used to satisfactorily predict the vertical, lateral, and longitudinal vibration. The present study shows that neural network based representations of all three UH-60A pilot floor vibration components (vertical, lateral, and longitudinal) can be obtained using the hub accelerations along with the gross weight and the advance ratio. The hub accelerations are clearly a factor in determining the pilot vibration. The present conclusions potentially allow for the identification of neural network relationships between the experimental hub accelerations obtained from wind tunnel testing and the experimental pilot vibration data obtained from flight testing. A successful establishment of the above neural network based link between the wind tunnel hub accelerations and the flight test vibration data can increase the value of wind tunnel testing.

  22. Implementing the UH Asynchronous Learning Network: Practices, Issues and Challenges

    ERIC Educational Resources Information Center

    Odin, Jaishree K.

    2002-01-01

    In spite of ten campuses spread over four islands, access to higher education at the University of Hawai'i (UH) is unevenly distributed across the state. In an effort to address the problem of access, the Alfred P. Sloan Foundation has funded the University of Hawai'i to develop online courses and programs. In this article, the author describes…

  23. Characteristics of polycyclic aromatic hydrocarbon (PAH) emissions from a UH-1H helicopter engine and its impact on the ambient environment

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Cheng; Lee, Wen-Jhy; Uang, Shi-Nian; Lee, Su-Hsing; Tsai, Perng-Jy

    The objective of this study is to characterize the emissions of polycyclic aromatic hydrocarbons (PAHs) from a UH-1H helicopter turboshaft engine and its impact on the ambient environment. Five power settings of the ground idle (GI), fly idle (FI), bleed band check (BBC), inlet guide vane (IGV), and take off (TO) were selected and samples were collected from the exhaust by using an isokinetic sampling system. Twenty-two PAH compounds were analyzed by gas chromatograph (GC)/MS. We found the mean total PAH concentration in the exhaust of the UH-1H engine (843 μg m⁻³) is 1.05-51.7 times in magnitude higher than those of the heavy-duty diesel (HDD) engine, motor vehicle engine, and F101 aircraft engine. Two- and three-ringed PAHs account for 97.5% of total PAH emissions from the UH-1H engine. The mean total PAH and total BaPeq emission factors for the UH-1H engine (63.4 and 0.309 mg L⁻¹ fuel) are 1.65-23.4 and 1.30-7.54 times in magnitude higher than those for the motor vehicle engine, HDD engine, and F101 aircraft engine. The total emission level of the single PAH compound, BaP, for the UH-1H engine (ELBaP) during one landing and take off (LTO) cycle (2.19 mg LTO⁻¹) was higher than the European Commission standard (1.24 mg LTO⁻¹), suggesting that appropriate measures should be taken to reduce PAH emissions from UH-1H engines in the future.
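
    The per-cycle emission level quoted above is, in form, a fuel-based emission factor multiplied by the fuel burned over one landing and take-off cycle. The sketch below shows that arithmetic; both inputs are hypothetical placeholders, since the abstract does not report the fuel burn per LTO.

    ```python
    def emission_per_lto(ef_mg_per_L, fuel_L_per_lto):
        """Emission level per LTO cycle: EF (mg per liter of fuel) x fuel burned (L per LTO)."""
        return ef_mg_per_L * fuel_L_per_lto

    # Hypothetical inputs for illustration; the study reports EL_BaP = 2.19 mg/LTO
    # against the European Commission standard of 1.24 mg/LTO.
    print(emission_per_lto(0.30, 7.0), "mg/LTO")
    ```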

  24. Survivability on the Island of Spice: The Development of the UH-60 Blackhawk and Its Baptism of Fire in Operation Urgent Fury

    DTIC Science & Technology

    2015-06-12

    SURVIVABILITY ON THE ISLAND OF SPICE: THE DEVELOPMENT OF THE UH-60 BLACKHAWK AND ITS BAPTISM OF FIRE IN OPERATION URGENT FURY...THESIS APPROVAL PAGE Name of Candidate: Major Matthew G. Easley Thesis Title: Survivability on the Island of Spice: The Development of the UH

  25. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.

  26. Blade Deflection Measurements of a Full-Scale UH-60A Rotor System

    NASA Technical Reports Server (NTRS)

    Olson, Lawrence E.; Abrego, Anita; Barrows, Danny A.; Burner, Alpheus W.

    2010-01-01

    Blade deflection (BD) measurements using stereo photogrammetry have been made during the individual blade control (IBC) testing of a UH-60A 4-bladed rotor system in the 40 by 80-foot test section of the National Full-Scale Aerodynamic Complex (NFAC). Measurements were made in quadrants one and two, encompassing advance ratios from 0.15 to 0.40, thrust coefficient/solidities from 0.05 to 0.12 and rotor-system drive shaft angles from 0.0 to -9.6 deg. The experiment represents a significant step toward providing benchmark databases to be utilized by theoreticians in the development and validation of rotorcraft prediction techniques. In addition to describing the stereo measurement technique and reporting on preliminary measurements made to date, the intent of this paper is to encourage feedback from the rotorcraft community concerning continued analysis of acquired data and to solicit suggestions for improved test technique and areas of emphasis for measurements in the upcoming UH-60A Airloads test at the NFAC.

  27. NASTRAN Modeling of Flight Test Components for UH-60A Airloads Program Test Configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino R.; Seible, Frieder

    1993-01-01

    Based upon the recommendations of the UH-60A Airloads Program Review Committee, work towards a NASTRAN remodeling effort has been conducted. This effort modeled and added the necessary structural/mass components to the existing UH-60A baseline NASTRAN model to reflect the addition of flight test components currently in place on the UH-60A Airloads Program Test Configuration used in NASA-Ames Research Center's Modern Technology Rotor Airloads Program. These components include necessary flight hardware such as instrument booms, movable ballast cart, equipment mounting racks, etc. Recent modeling revisions have also been included in the analyses to reflect the inclusion of new and updated primary and secondary structural components (i.e., tail rotor shaft service cover, tail rotor pylon) and improvements to the existing finite element mesh (i.e., revisions of material property estimates). Mode frequency and shape results have shown that components such as the Trimmable Ballast System baseplate and its respective payload ballast have caused a significant frequency change in a limited number of modes while only small percent changes in mode frequency are brought about with the addition of the other MTRAP flight components. With the addition of the MTRAP flight components, update of the primary and secondary structural model, and imposition of the final MTRAP weight distribution, modal results are computed representative of the 'best' model presently available.

  28. Helicopter noise definition report: UH-60A, S-76, A-109, 206-L

    DOT National Transportation Integrated Search

    1981-12-31

    This document presents noise data for the Sikorsky UH-60A Blackhawk, the Sikorsky S-76 Spirit, the Agusta A-109 and the Bell 206-L. The acoustical data are accompanied by phototheodolite tracking data, cockpit instrument panel photo data, and meteoro...

  29. Human Factors Assessment of the UH-60M Common Avionics Architecture System (CAAS) Crew Station During the Limited User Evaluation (LEUE)

    DTIC Science & Technology

    2005-12-01

    weapon system evaluation as a high-level architecture and distributed interactive simulation compliant, human-in-the-loop, virtual environment...Directorate to participate in the Limited Early User Evaluation (LEUE) of the Common Avionics Architecture System (CAAS) cockpit. ARL conducted a human...CAAS, the UH-60M PO conducted a limited early user evaluation (LEUE) to evaluate the integration of the CAAS in the UH-60M crew station. The

  30. STANDARD BIG BANG NUCLEOSYNTHESIS UP TO CNO WITH AN IMPROVED EXTENDED NUCLEAR NETWORK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coc, Alain; Goriely, Stephane; Xu, Yi

    Primordial or big bang nucleosynthesis (BBN) is one of the three strong pieces of evidence for the big bang model together with the expansion of the universe and cosmic microwave background radiation. In this study, we improve the standard BBN calculations taking into account new nuclear physics analyses and enlarge the nuclear network up to sodium. This is, in particular, important to evaluate the primitive value of CNO mass fraction that could affect Population III stellar evolution. For the first time we list the complete network of more than 400 reactions with references to the origin of the rates, including ≈270 reaction rates calculated using the TALYS code. Together with the cosmological light elements, we calculate the primordial beryllium, boron, carbon, nitrogen, and oxygen nuclei. We performed a sensitivity study to identify the important reactions for CNO, ⁹Be, and boron nucleosynthesis. We re-evaluated those important reaction rates using experimental data and/or theoretical evaluations. The results are compared with precedent calculations: a primordial beryllium abundance increase by a factor of four compared to its previous evaluation, but we note a stability for B/H and for the CNO/H abundance ratio that remains close to its previous value of 0.7 × 10⁻¹⁵. On the other hand, the extension of the nuclear network has not changed the ⁷Li value, so its abundance is still 3-4 times greater than its observed spectroscopic value.
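
    A BBN network code of the kind described here integrates one rate equation per nuclide, dY/dt = production minus destruction summed over all reactions. The toy sketch below shows that ODE structure for a single reversible reaction with made-up constant rates; a real network couples hundreds of temperature-dependent rates to the cosmic expansion.

    ```python
    from scipy.integrate import solve_ivp

    # Toy "network": A + A <-> B with constant forward/reverse rates, just to
    # show the dY/dt form a BBN code integrates for every nuclide abundance.
    k_f, k_r = 1.0, 0.2   # hypothetical rates, arbitrary units

    def rhs(t, y):
        YA, YB = y
        net = k_f * YA * YA - k_r * YB   # net flow through A + A -> B
        return [-2.0 * net, net]         # two A consumed per B produced

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
    print("final abundances:", sol.y[:, -1])
    ```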

  31. Using Fly-By-Wire Technology in Future Models of the UH-60 and Other Rotary Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Solem, Courtney K.

    2011-01-01

    Several fixed-wing airplanes have successfully used fly-by-wire (FBW) technology for the last 40 years. This technology is now beginning to be incorporated into rotary wing aircraft. By using FBW technology, manufacturers are expecting to improve upon the weight, maintenance time and costs, handling and reliability of the aircraft. Before mass production of this new system begins in new models such as the UH-60MU, testing must be conducted to ensure the safety of this technology as well as to reassure others it will be worth the time and money to make such a dramatic change to a perfectly functional machine. The RASCAL JUH-60A has been modified for these purposes. This Black Hawk helicopter has already been equipped with the FBW technology and can be configured as a near-perfect representation of the UH-60MU. Because both machines have very similar qualities, the data collected from the RASCAL can be used to make future decisions about the UH-60MU. The U.S. Army AFDD Flight Project Office oversees all the design modifications for every hardware system used in the RASCAL aircraft. This project deals with specific designs and analyses of unique RASCAL aircraft subsystems and their modifications to conduct flight mechanics research.

  32. Constraining nuclear data via cosmological observations: Neutrino energy transport and big bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paris, Mark W.; Fuller, George M.; Grohs, Evan Bradley

    Here, we introduce a new computational capability that moves toward a self-consistent calculation of neutrino transport and nuclear reactions for big bang nucleosynthesis (BBN). Such a self-consistent approach is needed to be able to extract detailed information about nuclear reactions and physics beyond the standard model from precision cosmological observations of primordial nuclides and the cosmic microwave background radiation. We also calculate the evolution of the early universe through the epochs of weak decoupling, weak freeze-out and big bang nucleosynthesis (BBN) by simultaneously coupling a full strong, electromagnetic, and weak nuclear reaction network with a multi-energy group Boltzmann neutrino energy transport scheme. The modular structure of our approach allows the dissection of the relative contributions of each process responsible for evolving the dynamics of the early universe. Such an approach allows a detailed account of the evolution of the active neutrino energy distribution functions alongside and self-consistently with the nuclear reactions and entropy/heat generation and flow between the neutrino and photon/electron/positron/baryon plasma components. Our calculations reveal nonlinear feedback in the time evolution of neutrino distribution functions and plasma thermodynamic conditions. We discuss the time development of neutrino spectral distortions and concomitant entropy production and extraction from the plasma. These effects result in changes in the computed values of the BBN deuterium and helium-4 yields that are on the order of a half-percent relative to a baseline standard BBN calculation with no neutrino transport. This is an order of magnitude larger effect than in previous estimates. For particular implementations of quantum corrections in plasma thermodynamics, our calculations show a 0.4% increase in deuterium and a 0.6% decrease in 4He over our baseline. The magnitudes of these changes are on the order of uncertainties

  33. Constraining nuclear data via cosmological observations: Neutrino energy transport and big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Paris, Mark; Fuller, George; Grohs, Evan; Kishimoto, Chad; Vlasenko, Alexey

    2017-09-01

    We introduce a new computational capability that moves toward a self-consistent calculation of neutrino transport and nuclear reactions for big bang nucleosynthesis (BBN). Such a self-consistent approach is needed to be able to extract detailed information about nuclear reactions and physics beyond the standard model from precision cosmological observations of primordial nuclides and the cosmic microwave background radiation. We calculate the evolution of the early universe through the epochs of weak decoupling, weak freeze-out and big bang nucleosynthesis (BBN) by simultaneously coupling a full strong, electromagnetic, and weak nuclear reaction network with a multi-energy group Boltzmann neutrino energy transport scheme. The modular structure of our approach allows the dissection of the relative contributions of each process responsible for evolving the dynamics of the early universe. Such an approach allows a detailed account of the evolution of the active neutrino energy distribution functions alongside and self-consistently with the nuclear reactions and entropy/heat generation and flow between the neutrino and photon/electron/positron/baryon plasma components. Our calculations reveal nonlinear feedback in the time evolution of neutrino distribution functions and plasma thermodynamic conditions. We discuss the time development of neutrino spectral distortions and concomitant entropy production and extraction from the plasma. These effects result in changes in the computed values of the BBN deuterium and helium-4 yields that are on the order of a half-percent relative to a baseline standard BBN calculation with no neutrino transport. This is an order of magnitude larger effect than in previous estimates. For particular implementations of quantum corrections in plasma thermodynamics, our calculations show a 0.4% increase in deuterium and a 0.6% decrease in 4He over our baseline. The magnitude of these changes are on the order of uncertainties in the nuclear

  34. Constraining nuclear data via cosmological observations: Neutrino energy transport and big bang nucleosynthesis

    DOE PAGES

    Paris, Mark W.; Fuller, George M.; Grohs, Evan Bradley; ...

    2017-09-13

    Here, we introduce a new computational capability that moves toward a self-consistent calculation of neutrino transport and nuclear reactions for big bang nucleosynthesis (BBN). Such a self-consistent approach is needed to be able to extract detailed information about nuclear reactions and physics beyond the standard model from precision cosmological observations of primordial nuclides and the cosmic microwave background radiation. We also calculate the evolution of the early universe through the epochs of weak decoupling, weak freeze-out and big bang nucleosynthesis (BBN) by simultaneously coupling a full strong, electromagnetic, and weak nuclear reaction network with a multi-energy group Boltzmann neutrino energy transport scheme. The modular structure of our approach allows the dissection of the relative contributions of each process responsible for evolving the dynamics of the early universe. Such an approach allows a detailed account of the evolution of the active neutrino energy distribution functions alongside and self-consistently with the nuclear reactions and entropy/heat generation and flow between the neutrino and photon/electron/positron/baryon plasma components. Our calculations reveal nonlinear feedback in the time evolution of neutrino distribution functions and plasma thermodynamic conditions. We discuss the time development of neutrino spectral distortions and concomitant entropy production and extraction from the plasma. These effects result in changes in the computed values of the BBN deuterium and helium-4 yields that are on the order of a half-percent relative to a baseline standard BBN calculation with no neutrino transport. This is an order of magnitude larger effect than in previous estimates. For particular implementations of quantum corrections in plasma thermodynamics, our calculations show a 0.4% increase in deuterium and a 0.6% decrease in 4He over our baseline. The magnitudes of these changes are on the order of uncertainties

  35. Airworthiness and Flight Characteristics Evaluation, UH-60A (Black Hawk) Helicopter

    DTIC Science & Technology

    1981-09-01

    ACTIVITY EDWARDS AIR FORCE BASE, CALIFORNIA 93523 DISCLAIMER NOTICE The findings of this report are not to be construed as an...EDWARDS AIR FORCE BASE, CALIFORNIA 68-0-BH031-01-68 II. CONTROLLING OFFICE NAME AND ADDRESS 11. REPORT DATE US ARMY AVN RESEARCH & DEVELOPMENT COMMAND...compliance with the applicable paragraphs of the Prime Item Development Specification. The UH-60A was tested at Edwards Air Force Base, California

  36. Comparison of NASTRAN analysis with ground vibration results of UH-60A NASA/AEFA test configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino; Seible, Frieder

    1990-01-01

    Preceding program flight tests, a ground vibration test and modal test analysis of a UH-60A Black Hawk helicopter was conducted by Sikorsky Aircraft to complement the UH-60A test plan and NASA/ARMY Modern Technology Rotor Airloads Program. The 'NASA/AEFA' shake test configuration was tested for modal frequencies and shapes and compared with its NASTRAN finite element model counterpart to give correlative results. Based upon previous findings, significant differences in modal data existed and were attributed to assumptions regarding the influence of secondary structure contributions in the preliminary NASTRAN modeling. An analysis of an updated finite element model including several secondary structural additions has confirmed that the inclusion of specific secondary components produces a significant effect on modal frequency and free-response shapes and improves correlations at lower frequencies with shake test data.

  37. Analysis of propulsion system dynamics in the validation of a high-order state space model of the UH-60

    NASA Technical Reports Server (NTRS)

    Kim, Frederick D.

    1992-01-01

    Frequency responses generated from a high-order linear model of the UH-60 Black Hawk have shown that the propulsion system significantly influences the vertical and yaw dynamics of the aircraft at frequencies important to high-bandwidth control law designs. The inclusion of the propulsion system comprises the latest step in the development of a high-order linear model of the UH-60 that additionally models the dynamics of the fuselage, rotor, and inflow. A complete validation study of the linear model is presented in the frequency domain for both on-axis and off-axis coupled responses in the hover flight condition, and on-axis responses for forward speeds of 80 and 120 knots.
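
    Generating frequency responses from a linear state-space model, as done for the UH-60 model above, is mechanical once the A, B, C, D matrices are in hand. A minimal sketch with a toy second-order system follows; the matrices are arbitrary illustrative values, not the UH-60 model.

    ```python
    import numpy as np
    from scipy import signal

    # Toy state-space model x' = Ax + Bu, y = Cx + Du standing in for the
    # high-order UH-60 model.
    A = np.array([[0.0, 1.0], [-4.0, -0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    sys = signal.StateSpace(A, B, C, D)
    w = np.logspace(-1, 2, 200)          # rad/s, spanning a control-relevant band
    w, mag, phase = signal.bode(sys, w)  # magnitude (dB) and phase (deg)
    print(f"peak response {mag.max():.1f} dB near {w[mag.argmax()]:.2f} rad/s")
    ```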

  38. Comparison of Computed and Measured Vortex Evolution for a UH-60A Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Ahmad, Jasim Uddin; Yamauchi, Gloria K.; Kao, David L.

    2013-01-01

    A Computational Fluid Dynamics (CFD) simulation using the Navier-Stokes equations was performed to determine the evolutionary and dynamical characteristics of the vortex flowfield for a highly flexible aeroelastic UH-60A rotor in forward flight. The experimental wake data were acquired using Particle Image Velocimetry (PIV) during a test of the full-scale UH-60A rotor in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The PIV measurements were made in a stationary cross-flow plane at 90 deg rotor azimuth. The CFD simulation was performed using the OVERFLOW CFD solver loosely coupled with the rotorcraft comprehensive code CAMRAD II. Characteristics of vortices captured in the PIV plane from different blades are compared with CFD calculations. The blade airloads were calculated using two different turbulence models. A limited spatial, temporal, and CFD/comprehensive-code coupling sensitivity analysis was performed in order to verify the unsteady helicopter simulations with a moving rotor grid system.

  39. V/STOLAND digital avionics system for UH-1H

    NASA Technical Reports Server (NTRS)

    Liden, S.

    1978-01-01

    A hardware and software system for the Bell UH-1H helicopter was developed that provides sophisticated navigation, guidance, control, display, and data acquisition capabilities for performing terminal area navigation, guidance and control research. Two Sperry 1819B general purpose digital computers were used. One contains the development software that performs all the specified system flight computations. The second computer is available to NASA for experimental programs that run simultaneously with the other computer programs and which may, at the push of a button, replace selected computer computations. Other features that provide research flexibility include keyboard selectable gains and parameters and software generated alphanumeric and CRT displays.

  40. A model structure for identification of linear models of the UH-60 helicopter in hover and forward flight

    DOT National Transportation Integrated Search

    1995-08-01

    A linear model structure applicable to identification of the UH-60 flight dynamics in hover and forward flight without rotor-state data is developed. The structure of the model is determined through consideration of the important dynamic modes ...

  41. Computation of UH-60A Airloads Using CFD/CSD Coupling on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth M.

    2011-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to compute the rotor airloads on the UH-60A helicopter at high-speed and high thrust conditions. The flow solver is coupled to a rotorcraft comprehensive code in order to account for trim and aeroelastic deflections. Simulations are performed both with and without the fuselage, and the effects of grid resolution, temporal resolution and turbulence model are examined. Computed airloads are compared to flight data.

  42. Discourse, Power, and Knowledge in the Management of "Big Science": The Production of Consensus in a Nuclear Fusion Research Laboratory.

    ERIC Educational Resources Information Center

    Kinsella, William J.

    1999-01-01

    Extends a Foucauldian view of power/knowledge to the archetypical knowledge-intensive organization, the scientific research laboratory. Describes the discursive production of power/knowledge at the "big science" laboratory conducting nuclear fusion research and illuminates a critical incident in which the fusion research…

  43. UH-1 Helicopter Mechanic (MOS 67N20) Job Description Survey: Performance of Specific Maintenance Tasks.

    ERIC Educational Resources Information Center

    Schulz, Russel E.; And Others

    The report is the second of two describing the results of a world-wide survey of the maintenance activities of UH-1 helicopter mechanics for the purpose of studying the relationships among job requirements, training, and manpower considerations for aviation maintenance. A summary of the results of the first report is included. The survey…

  44. UH-1 Helicopter Mechanic (MOS 67N20) Job Description Survey: Background, Training, and General Maintenance Activities.

    ERIC Educational Resources Information Center

    Schulz, Russel E.; And Others

    The report, the first of two documents examining the relationship among job requirements, training, and manpower considerations for Army aviation maintenance personnel, discusses the development of task data gathering techniques and procedures for incorporating this data into training programs for the UH-1 helicopter mechanic specialty (MOS…

  45. Fidelity assessment of a UH-60A simulation on the NASA Ames vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Atencio, Adolph, Jr.

    1993-01-01

    Helicopter handling qualities research requires that a ground-based simulation be a high-fidelity representation of the actual helicopter, especially over the frequency range of the investigation. This experiment was performed to assess the current capability to simulate the UH-60A Black Hawk helicopter on the Vertical Motion Simulator (VMS) at NASA Ames, to develop a methodology for assessing the fidelity of a simulation, and to find the causes for lack of fidelity. The approach used was to compare the simulation to the flight vehicle for a series of tasks performed in flight and in the simulator. The results show that subjective handling qualities ratings from flight to simulator overlap, and the mathematical model matches the UH-60A helicopter very well over the range of frequencies critical to handling qualities evaluation. Pilot comments, however, indicate a need for improvement in the perceptual fidelity of the simulation in the areas of motion and visual cuing. The methodology used to make the fidelity assessment proved useful in showing differences in pilot work load and strategy, but additional work is needed to refine objective methods for determining causes of lack of fidelity.

  46. BigData and computing challenges in high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute".

  47. ET-26 hydrochloride (ET-26 HCl) has similar hemodynamic stability to that of etomidate in normal and uncontrolled hemorrhagic shock (UHS) rats.

    PubMed

    Wang, Bin; Chen, Shouming; Yang, Jun; Yang, Linghui; Liu, Jin; Zhang, Wensheng

    2017-01-01

    ET-26 HCl is a promising sedative-hypnotic anesthetic with virtually no effect on adrenocortical steroid synthesis. However, whether or not ET-26 HCl also has a sufficiently wide safety margin and hemodynamic stability similar to that of etomidate and related compounds remains unknown. In this study, the effects of ET-26 HCl, etomidate and propofol on therapeutic index, heart rate (HR), mean arterial pressure (MAP), maximal rate for left ventricular pressure rise (Dmax/t), and maximal rate for left ventricular pressure decline (Dmin/t) were investigated in healthy rats and a rat model of uncontrolled hemorrhagic shock (UHS). The 50% effective dose (ED50) and 50% lethal dose (LD50) were determined after single bolus doses of propofol, etomidate, or ET-26 HCl using the Bliss method and the up and down method, respectively. Rats were divided into a normal group and a UHS group, with each group receiving etomidate, ET-26 HCl, or propofol (n = 6 per subgroup). In the normal group, after preparation for hemodynamic and heart-function monitoring, rats were administered a dose of one of the test agents twofold higher than the established ED50, followed by hemodynamic and heart-function monitoring. Rats in the UHS group underwent experimentally induced UHS with a target arterial pressure of 40 mmHg for 1 hour, followed by administration of an ED50 dose of one of the experimental agents. Blood-gas analysis was conducted on samples obtained during equilibration with the experimental setup and at the end of the experiment. In the normal group, no significant differences in HR, MAP, Dmax/t and Dmin/t (all P > 0.05) were observed at any time point between the etomidate and ET-26 HCl groups, whereas HR, MAP and Dmax/t decreased briefly and Dmin/t increased following propofol administration. In the UHS group, no significant differences in HR, MAP, Dmax/t and Dmin/t were observed before and after administration

  48. High Energy Density Plasmas (HEDP) for studies of basic nuclear science relevant to Stellar and Big Bang Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Frenje, Johan

    2014-06-01

    Thermonuclear reaction rates and nuclear processes have been explored traditionally by means of conventional accelerator experiments, which are difficult to execute at conditions relevant to stellar nucleosynthesis. Thus, nuclear reactions at stellar energies are often studied through extrapolations from higher-energy data or in low-background underground experiments. Even when measurements are possible using accelerators at relevant energies, thermonuclear reaction rates in stars are inherently different from those in accelerator experiments. The fusing nuclei are surrounded by bound electrons in accelerator experiments, whereas electrons occupy mainly continuum states in a stellar environment. Nuclear astrophysics research will therefore benefit from an enlarged toolkit for studies of nuclear reactions. In this presentation, we report on the first use of High Energy Density Plasmas for studies of nuclear reactions relevant to basic nuclear science, stellar and Big Bang nucleosynthesis. These experiments were carried out at the OMEGA laser facility at University of Rochester and the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, in which spherical capsules were irradiated with powerful lasers to compress and heat the fuel to high enough temperatures and densities for nuclear reactions to occur. Four experiments will be highlighted in this presentation. In the first experiment, the differential cross section for the elastic neutron-triton (n-T) scattering at 14.1 MeV was measured with significantly higher accuracy than achieved in accelerator experiments. In the second experiment, the T(t,2n)4He reaction, a mirror reaction to the 3He(3He,2p)4He reaction that plays an important role in the proton-proton chain that transforms hydrogen into ordinary 4He in stars like our Sun, was studied at energies in the range 15-40 keV. In the third experiment, the 3He+3He solar fusion reaction was studied directly, and in the fourth experiment, we
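
    The 15-40 keV window mentioned above brackets the Gamow peak, the most effective energy for a thermonuclear reaction at a given temperature. As a back-of-envelope illustration, the sketch below evaluates the standard Gamow-peak estimate for ³He+³He at an assumed solar-core-like kT of 1.3 keV; the constants are standard, the temperature is a round assumed value.

    ```python
    import math

    ALPHA = 1.0 / 137.036   # fine-structure constant
    U_KEV = 931494.0        # atomic mass unit, keV/c^2

    def gamow_peak_keV(Z1, Z2, A1, A2, kT_keV):
        """E0 = (sqrt(E_G)*kT/2)^(2/3), with E_G = 2*mu*c^2*(pi*alpha*Z1*Z2)^2."""
        mu_c2 = U_KEV * A1 * A2 / (A1 + A2)  # reduced-mass energy, keV
        E_G = 2.0 * mu_c2 * (math.pi * ALPHA * Z1 * Z2) ** 2
        return (math.sqrt(E_G) * kT_keV / 2.0) ** (2.0 / 3.0)

    # 3He + 3He at kT ~ 1.3 keV gives E0 ~ 21 keV, inside the measured window.
    print(f"E0 ~ {gamow_peak_keV(2, 2, 3, 3, 1.3):.1f} keV")
    ```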

  49. Nuclear polarization effects in big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Voronchev, Victor T.; Nakao, Yasuyuki

    2015-10-01

    A standard nuclear reaction network for big bang nucleosynthesis (BBN) simulations operates with spin-averaged nuclear inputs—unpolarized reaction cross sections. At the same time, the major part of reactions controlling the abundances of light elements is spin dependent, i.e., their cross sections depend on the mutual orientation of reacting particle spins. Primordial magnetic fields in the BBN epoch may to a certain degree polarize particles and thereby affect some reactions between them, introducing uncertainties in standard BBN predictions. To clarify the points, we have examined the effects of induced polarization on key BBN reactions—p(n,γ)d, d(d,p)t, d(d,n)³He, t(d,n)α, ³He(n,p)t, ³He(d,p)α, ⁷Li(p,α)α, ⁷Be(n,p)⁷Li—and the abundances of elements with A ≤ 7. It has been obtained that the magnetic field with the strength B₀ ≤ 10¹² G (at the temperature of 10⁹ K) has almost no effect on the reaction cross sections, and the spin polarization mechanism plays a minor role in the element production, changing the abundances at most by 0.01%. However, if the magnetic field B₀ reaches 10¹⁵ G its effect on the key reactions appears and becomes appreciable at B₀ ≳ 10¹⁶ G. In particular, it has been found that such a field can increase the p(n,γ)d cross section (relevant to the starting point of BBN) by a factor of 2 and at the same time almost block the ³He(n,p)t reaction responsible for the interconversion of A = 3 nuclei in the early Universe. This suggests that the spin polarization effects may become important in nonstandard scenarios of BBN considering the existence of local magnetic bubbles inside which the field can reach ~10¹⁵ G.

  10. Evaluation of Wind Tunnel and Scaling Effects with the UH-60A Airloads Rotor

    DTIC Science & Technology

    2011-05-01

    Nomenclature fragments: V∞, free-stream velocity (ft/s); x, chordwise distance from leading edge (ft); corrected/geometric shaft angles (deg); first-harmonic cosine/sine components... attached to spindles that were retained by elastomeric bearings to a one-piece titanium hub. These bearings permitted blade flap, lead-lag, and... Figure 3. UH-60A small-scale rotor installed in DNW. Main rotor dampers were installed between each of the main rotor spindles and the hub to

  11. "Uh," "Um," and Autism: Filler Disfluencies as Pragmatic Markers in Adolescents with Optimal Outcomes from Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Irvine, Christina A.; Eigsti, Inge-Marie; Fein, Deborah A.

    2016-01-01

    Filler disfluencies--"uh" and "um"--are thought to serve distinct discourse functions. We examined fillers in spontaneous speech by youth with autism spectrum disorder (ASD), who struggle with pragmatic language, and by youth with ASD who have achieved an "optimal outcome" (OO), as well as in peers with typical…

  12. Accelerated Testing of UH-60 Viscous Bearings for Degraded Grease Fault

    NASA Technical Reports Server (NTRS)

    Dykas, Brian; Hood, Adrian; Krantz, Timothy; Klemmer, Marko

    2015-01-01

    An accelerated aging investigation of critical aviation bearings lubricated with MIL-PRF-81322 grease was conducted to derive an understanding of the mechanisms of grease degradation and loss of lubrication over time. The current study focuses on UH-60 Black Hawk viscous damper bearings supporting the tail rotor driveshaft, which were subjected to more than 5800 hours of testing in a heated environment to accelerate the deterioration of the grease. The mechanism of grease degradation is a reduction in the oil/thickener ratio rather than the expected chemical degradation of grease constituents. Over the course of testing, vibration and temperature monitoring of bearings was conducted and trends for failing bearings are presented.

  13. Introduction to big bang nucleosynthesis and modern cosmology

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Kusakabe, Motohiko; Kajino, Toshitaka

    Primordial nucleosynthesis remains one of the pillars of modern cosmology. It is the testing ground upon which many cosmological models must ultimately rest. It is our only probe of the universe during the important radiation-dominated epoch in the first few minutes of cosmic expansion. This paper reviews the basic equations of space-time, cosmology, and big bang nucleosynthesis. We also summarize the current state of observational constraints on primordial abundances along with the key nuclear reactions and their uncertainties. We summarize which nuclear measurements are most crucial during the big bang. We also review various cosmological models and their constraints. In particular, we analyze the constraints that big bang nucleosynthesis places upon the possible time variation of fundamental constants, along with constraints on the nature and origin of dark matter and dark energy, long-lived supersymmetric particles, gravity waves, and the primordial magnetic field.
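    For orientation, two of the "basic equations" the review refers to are the Friedmann equation and the radiation-era time-temperature relation (standard results, quoted here for the reader):

```latex
H^2 = \left(\frac{\dot a}{a}\right)^2
    = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
\qquad
t \simeq \frac{2.4}{\sqrt{g_*}} \left(\frac{T}{1\ \mathrm{MeV}}\right)^{-2} \mathrm{s},
```

    where g_* counts the relativistic degrees of freedom. The second relation is what places BBN in "the first few minutes": T ≈ 0.1 MeV corresponds to t of roughly two minutes.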

  14. Application of Neural Networks to Predict UH-60L Electrical Generator Condition using (IMD-HUMS) Data

    DTIC Science & Technology

    2006-12-01

    Data transfer unit (DTU) • Remote data concentrator (RDC) • Main processor unit (MPU) • 2 junction boxes (JB1/JB2) • 20 drive train and... Application of Neural Networks to Predict UH-60L Electrical Generator Condition Using (IMD-HUMS) Data, by Evangelos Tourvalis, December 2006, Thesis Advisor...

  15. If You Say Thee Uh You Are Describing Something Hard: The On-Line Attribution of Disfluency during Reference Comprehension

    ERIC Educational Resources Information Center

    Arnold, Jennifer E.; Kam, Carla L. Hudson; Tanenhaus, Michael K.

    2007-01-01

    Eye-tracking and gating experiments examined reference comprehension with fluent (Click on the red. . .) and disfluent (Click on [pause] thee uh red . . .) instructions while listeners viewed displays with 2 familiar (e.g., ice cream cones) and 2 unfamiliar objects (e.g., squiggly shapes). Disfluent instructions made unfamiliar objects more…

  16. A failure effects simulation of a low authority flight control augmentation system on a UH-1H helicopter

    NASA Technical Reports Server (NTRS)

    Corliss, L. D.; Talbot, P. D.

    1977-01-01

    A two-pilot moving base simulator experiment was conducted to assess the effects of servo failures of a flight control system on the transient dynamics of a Bell UH-1H helicopter. The flight control hardware considered was part of the V/STOLAND system, built with control authorities of 20-40%. Servo hardover and oscillatory failures were simulated in each control axis. Measurements were made to determine the adequacy of the failure monitoring system time delay and the servo center-and-lock time constant, the pilot reaction times, and the altitude and attitude excursions of the helicopter at hover and 60 knots. Safe recoveries were made from all failures under VFR conditions. Pilot reaction times were from 0.5 to 0.75 sec. Reduction of monitor delay times below these values resulted in significantly reduced excursion envelopes. A subsequent flight test was conducted on a UH-1H helicopter with the V/STOLAND system installed. Series servo hardovers were introduced in hover and at 60 knots straight and level. Data from these tests are included for comparison.

  17. Refined scenario of standard Big Bang nucleosynthesis allowing for nonthermal nuclear reactions in the primordial plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voronchev, Victor T.; Nakao, Yasuyuki; Nakamura, Makoto

    The standard scenario of big bang nucleosynthesis (BBN) is generalized to take into account nonthermal nuclear reactions in the primordial plasma. These reactions are naturally triggered in the BBN epoch by fast particles generated in various exoergic processes. It is found that, although such particles can appreciably enhance the rates of some individual reactions, their influence on the whole process of element production is not significant. The nonthermal corrections to element abundances are found to be 0.1% (3H), -0.03% (7Li), and 0.34%-0.63% (CNO group).

  18. V/STOLAND avionics system flight-test data on a UH-1H helicopter

    NASA Technical Reports Server (NTRS)

    Baker, F. A.; Jaynes, D. N.; Corliss, L. D.; Liden, S.; Merrick, R. B.; Dugan, D. C.

    1980-01-01

    The results of the 1977 flight-acceptance tests of the V/STOLAND (versatile simplex digital avionics system) on a Bell UH-1H helicopter at Ames Research Center are presented. The system provides navigation, guidance, control, and display functions for NASA terminal-area VTOL research programs and for the Army handling-qualities research programs at Ames Research Center. The acceptance test verified system performance and contractual acceptability. The V/STOLAND hardware and the navigation, guidance, and control laws resident in the digital computers are described. Typical flight-test data are shown and discussed as documentation of the system performance at acceptance from the contractor.

  19. Big Opportunities and Big Concerns of Big Data in Education

    ERIC Educational Resources Information Center

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  20. Flight Test Identification and Simulation of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; Sahai, Ranjana; Tucker, George E.; McCoy, Allen H.; Tyson, Peter H.; Tischler, Mark B.; Rosen, Aviv

    2001-01-01

    Helicopter slung-load operations are common in both military and civil contexts. Helicopters and loads are often qualified for these operations by means of flight tests, which can be expensive and time consuming. There is significant potential to reduce such costs both through revisions in flight-test methods and by using validated simulation models. To these ends, flight tests were conducted at Moffett Field to demonstrate the identification of key dynamic parameters during flight tests (aircraft stability margins and handling-qualities parameters, and load pendulum stability), and to accumulate a data base for simulation development and validation. The test aircraft was a UH-60A Black Hawk, and the primary test load was an instrumented 8- by 6- by 6-ft cargo container. Tests were focused on the lateral and longitudinal axes, which are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities; tests were conducted at airspeeds from hover to 80 knots. Using telemetered data, the dynamic parameters were evaluated in near real time after each test airspeed and before clearing the aircraft to the next test point. These computations were completed in under 1 min. A simulation model was implemented by integrating an advanced model of the UH-60A aerodynamics, dynamic equations for the two-body slung-load system, and load static aerodynamics obtained from wind-tunnel measurements. Comparisons with flight data for the helicopter alone and with a slung load showed good overall agreement for all parameters and test points; however, unmodeled secondary dynamic losses around 2 Hz were found in the helicopter model and they resulted in conservative stability margin estimates.

  1. The Last Big Bang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuire, Austin D.; Meade, Roger Allen

    As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.

  2. Helicopter Noise Definition Report UH-60A, S-76, A-109, 206-L

    DTIC Science & Technology

    1981-12-01

    [Scanned-report fragments; only the figure captions are recoverable: Fig. 2.3.1 Sikorsky UH-60A "Black Hawk"; Fig. 2.3.2 Sikorsky S-76 "Spirit". The remainder of the excerpt is unreadable OCR of noise-level tables and an abbreviation key (PNLT recorded/radiated times, climb/descent angle, ground speed).]

  3. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    ERIC Educational Resources Information Center

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  4. Blade Displacement Measurements of the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Barrows, Danny A.; Burner, Alpheus W.; Abrego, Anita I.; Olson, Lawrence E.

    2011-01-01

    Blade displacement measurements were acquired during a wind tunnel test of the full-scale UH-60A Airloads rotor. The test was conducted in the 40- by 80-Foot Wind Tunnel of the National Full-Scale Aerodynamics Complex at NASA Ames Research Center. Multi-camera photogrammetry was used to measure the blade displacements of the four-bladed rotor. These measurements encompass a range of test conditions that include advance ratios from 0.15 up to 1.0 (the highest values in unique slowed-rotor simulations), thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. The objective of these measurements is to provide a benchmark blade displacement database to be utilized in the development and validation of rotorcraft computational tools. The methodology, system development, measurement techniques, and preliminary sample blade displacement measurements are presented.

  5. Uh and um in Children With Autism Spectrum Disorders or Language Impairment

    PubMed Central

    Gorman, Kyle; Olson, Lindsay; Presmanes Hill, Alison; Lunsford, Rebecca; Heeman, Peter A.; van Santen, Jan P. H.

    2016-01-01

    Atypical pragmatic language is often present in individuals with autism spectrum disorders (ASD), along with delays or deficits in structural language. This study investigated the use of the “fillers” uh and um by children ages 4–8 during the Autism Diagnostic Observation Schedule. Fillers reflect speakers’ difficulties with planning and delivering speech, but they also serve communicative purposes, such as negotiating control of the floor or conveying uncertainty. We hypothesized that children with ASD would use different patterns of fillers compared to peers with typical development or with specific language impairment (SLI), reflecting differences in social ability and communicative intent. Regression analyses revealed that children in the ASD group were much less likely to use um than children in the other two groups. Filler use is an easy-to-quantify feature of behavior that, in concert with other observations, may help to distinguish ASD from SLI. PMID:26800246

  6. Big bang nucleosynthesis: The strong nuclear force meets the weak anthropic principle

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2009-08-01

    Contrary to a common argument that a small increase in the strength of the strong force would lead to the destruction of all hydrogen in the big bang, due to binding of the diproton and the dineutron, with a catastrophic impact on life as we know it, we show that, provided the increase in the strong-force coupling constant is less than about 50%, substantial amounts of hydrogen remain. The reason is that an increase in strong-force strength leads to tighter binding of the deuteron, permitting nucleosynthesis to occur earlier in the big bang, at higher temperature than in the standard big bang. Photodestruction of the less tightly bound diproton and dineutron delays their production until after the bulk of nucleosynthesis is complete. The decay of the diproton can, however, lead to relatively large abundances of deuterium.
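    A one-line version of the timing argument (standard BBN reasoning, stated here under the usual assumptions): nucleosynthesis effectively begins once photodisintegration of the deuteron becomes inefficient, roughly when fewer than one photon per baryon remains above the deuteron binding energy B_d,

```latex
\eta^{-1} e^{-B_d / k_B T} \sim 1
\;\Longrightarrow\;
k_B T_{\rm BBN} \approx \frac{B_d}{\ln(1/\eta)}
  \approx \frac{2.22\ \mathrm{MeV}}{21} \approx 0.1\ \mathrm{MeV},
\qquad \eta \simeq 6\times 10^{-10},
```

    where η is the baryon-to-photon ratio. A more tightly bound deuteron (larger B_d) raises T_BBN, which is exactly why nucleosynthesis starts earlier, at higher temperature, in the stronger-force scenario described above.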

  7. An Examination of Unsteady Airloads on a UH-60A Rotor: Computation Versus Measurement

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth

    2012-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to simulate the flow over a UH-60A rotor. Traditionally, the computed pressure and shear stresses are integrated on the computational mesh at selected radial stations and compared to measured airloads. However, the corresponding integration of experimental data uses only the pressure contribution, and the set of integration points (pressure taps) is modest compared to the computational mesh resolution. This paper examines the difference between the traditional integration of computed airloads and an integration consistent with that used for the experimental data. In addition, a comparison of chordwise pressure distributions between computation and measurement is made. Examination of this unsteady pressure data provides new opportunities to understand differences between computation and flight measurement.

  8. Big Data, Big Problems: A Healthcare Perspective.

    PubMed

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  9. Solution structure of leptospiral LigA4 Big domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Song; Zhang, Jiahai; Zhang, Xuecheng

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to that of other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca2+-binding property of the LigA4 Big domain. - Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca2+-binding site was identified by Stains-all assay and NMR chemical shift perturbation.

  10. Blade Displacement Predictions for the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth M.

    2014-01-01

    An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids is loosely coupled to a rotorcraft comprehensive code and used to simulate two different test conditions from a wind-tunnel test of a full-scale UH-60A rotor. Performance data and sectional airloads from the simulation are compared with corresponding tunnel data to assess the level of fidelity of the aerodynamic aspects of the simulation. The focus then turns to a comparison of the blade displacements, both rigid (blade root) and elastic. Comparisons of computed root motions are made with data from three independent measurement systems. Finally, comparisons are made between computed elastic bending and elastic twist, and the corresponding measurements obtained from a photogrammetry system. Overall the correlation between computed and measured displacements was good, especially for the root pitch and lag motions and the elastic bending deformation. The correlation of root lead-lag motion and elastic twist deformation was less favorable.

  11. Ground shake test of the UH-60A helicopter airframe and comparison with NASTRAN finite element model predictions

    NASA Technical Reports Server (NTRS)

    Howland, G. R.; Durno, J. A.; Twomey, W. J.

    1990-01-01

    Sikorsky Aircraft, together with the other major helicopter airframe manufacturers, is engaged in a study to improve the use of finite element analysis to predict the dynamic behavior of helicopter airframes, under a rotorcraft structural dynamics program called DAMVIBS (Design Analysis Methods for VIBrationS) sponsored by NASA Langley. The test plan and test results are presented for a shake test of the UH-60A BLACK HAWK helicopter. A comparison of test results with results obtained from analysis using a NASTRAN finite element model is also presented.

  12. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  13. Inclusions and Substructures in Uranium of Nuclear Purity. Report No. 51; INCLUSIONES Y SUBESTRUCTURAS EN URANIO DE PUREZA NUCLEAR. INFORME NO. 51

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biloni, H.; Lindenvald, N.; Sabato, J.A.

    1961-01-01

    The inclusions in uranium of nuclear purity (UC, UH3, UO2, UO, UN, and the complexes which include the intersolubility of U with C and N or with C, N, and O) were analyzed metallographically, and the results reported by other authors were discussed critically. The existence of the fine precipitate reticular substructure, sensitive to thermal treatments, which generally appears in uranium was analyzed. Its origins were discussed in accordance with bibliographic data. Complementary data for its comprehension are given from the metallographic analysis of U-Al and U-Fe alloys with low Al and Fe concentrations. (tr-auth)

  14. Exploring Volcanism with Digital Technology in Undergraduate Education

    NASA Astrophysics Data System (ADS)

    McCoy, F. W.; Parisky, A.

    2016-12-01

    Volcanism, as one of the most dynamic geological processes on this planet, is also one of the most dramatic for attracting students to the earth sciences. At the University of Hawaii (UH), volcanism is used to attract students into the geosciences, coupled with its significant association with Hawaiian culture and with contemporary issues such as the related hazards; for example, during the past century five towns were buried by lava flows on the Big Island, and another was recently threatened with destruction. To bring this dynamism into undergraduate education, UH focuses on field trips and courses to all islands; at Windward Community College (WCC/UH) this focus is provided through a series of 1-credit field courses to all islands, especially the Big Island. Critical to the WCC effort are computer-generated animations and descriptions of volcanological processes for illustrating concepts undergraduate students find difficult: tumescence as an indicator of an eruption, fractional crystallization, collapse of volcanic edifices, explosive eruptions, weathering processes, and hazards and mitigation, all embedded in the evolutionary story of mid-ocean volcanic islands such as those in Hawaii. Field courses require intense field labs, which are significantly assisted by digital platforms that include computer-generated illustrations, descriptions, animations, and more. The consequence for developing geoscientists has been outstanding.

  15. How Big Is Too Big?

    ERIC Educational Resources Information Center

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  16. BigDog

    NASA Astrophysics Data System (ADS)

    Playter, R.; Buehler, M.; Raibert, M.

    2006-05-01

    BigDog's goal is to be the world's most advanced quadruped robot for outdoor applications. BigDog is aimed at the mission of a mechanical mule - a category with few competitors to date: power autonomous quadrupeds capable of carrying significant payloads, operating outdoors, with static and dynamic mobility, and fully integrated sensing. BigDog is about 1 m tall, 1 m long and 0.3 m wide, and weighs about 90 kg. BigDog has demonstrated walking and trotting gaits, as well as standing up and sitting down. Since its creation in the fall of 2004, BigDog has logged tens of hours of walking, climbing and running time. It has walked up and down 25 & 35 degree inclines and trotted at speeds up to 1.8 m/s. BigDog has walked at 0.7 m/s over loose rock beds and carried over 50 kg of payload. We are currently working to expand BigDog's rough terrain mobility through the creation of robust locomotion strategies and terrain sensing capabilities.

  17. Flight-Time Identification of a UH-60A Helicopter and Slung Load

    NASA Technical Reports Server (NTRS)

    Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani

    1998-01-01

    This paper describes a flight test demonstration of a system for identifying the stability and handling-qualities parameters of a helicopter slung-load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency-domain analysis in the frequency range to 2 Hz, and focused on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling-qualities parameters, and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.

  18. UH cosmic rays and solar system material - The elements just beyond iron

    NASA Technical Reports Server (NTRS)

    Wefel, J. P.; Schramm, D. N.; Blake, J. B.

    1977-01-01

    The nucleosynthesis of cosmic-ray elements between the iron peak and the rare-earth region is examined, and compositional changes introduced by propagation in interstellar space are calculated. Theories on the origin of elements heavier than iron are reviewed, a supernova model of explosive nucleosynthesis is adopted for the ultraheavy (UH) cosmic rays, and computational results for different source distributions are compared with experimental data. It is shown that both the cosmic-ray data and the nucleosynthesis calculations are not yet of sufficient precision to pinpoint the processes occurring in cosmic-ray source regions, that the available data do provide boundary conditions for cosmic-ray nucleosynthesis, and that these limits may apply to the origin of elements in the solar system. Specifically, it is concluded that solar-system abundances appear to be consistent with a superposition of the massive-star core-helium-burning s-process plus explosive-carbon-burning synthesis for the elements from Cu to As and are explained adequately by the s- and r-processes for heavier elements.

  19. Big Bang 6Li nucleosynthesis studied deep underground (LUNA collaboration)

    NASA Astrophysics Data System (ADS)

    Trezzi, D.; Anders, M.; Aliotta, M.; Bellini, A.; Bemmerer, D.; Boeltzig, A.; Broggini, C.; Bruno, C. G.; Caciolli, A.; Cavanna, F.; Corvisiero, P.; Costantini, H.; Davinson, T.; Depalo, R.; Elekes, Z.; Erhard, M.; Ferraro, F.; Formicola, A.; Fülop, Zs.; Gervino, G.; Guglielmetti, A.; Gustavino, C.; Gyürky, Gy.; Junker, M.; Lemut, A.; Marta, M.; Mazzocchi, C.; Menegazzo, R.; Mossa, V.; Pantaleo, F.; Prati, P.; Rossi Alvarez, C.; Scott, D. A.; Somorjai, E.; Straniero, O.; Szücs, T.; Takacs, M.

    2017-03-01

    The correct prediction of the abundances of the light nuclides produced during the epoch of Big Bang Nucleosynthesis (BBN) is one of the main topics of modern cosmology. For many of the nuclear reactions that are relevant for this epoch, direct experimental cross section data are available, ushering in the so-called "age of precision". The present work addresses an exception to this current status: the 2H(α,γ)6Li reaction that controls 6Li production in the Big Bang. Recent controversial observations of 6Li in metal-poor stars have heightened the interest in understanding primordial 6Li production. If confirmed, these observations would lead to a second cosmological lithium problem, in addition to the well-known 7Li problem. In the present work, the direct experimental cross section data on 2H(α,γ)6Li in the BBN energy range are reported. The measurement has been performed deep underground at the LUNA (Laboratory for Underground Nuclear Astrophysics) 400 kV accelerator in the Laboratori Nazionali del Gran Sasso, Italy. The cross section has been directly measured at the energies of interest for Big Bang Nucleosynthesis for the first time, at Ecm = 80, 93, 120, and 133 keV. Based on the new data, the 2H(α,γ)6Li thermonuclear reaction rate has been derived. Our rate is even lower than previously reported, thus increasing the discrepancy between the predicted Big Bang 6Li abundance and the amount of primordial 6Li inferred from observations.
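    For reference, the standard parameterization behind such low-energy measurements (textbook material, not specific to LUNA): the cross section is written in terms of the astrophysical S-factor, and a Maxwellian plasma samples it most strongly near the Gamow peak energy E_0,

```latex
\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta(E)},
\qquad
E_0 \simeq 1.22\left(Z_1^2 Z_2^2\, \mu\, T_6^2\right)^{1/3} \mathrm{keV},
```

    where η is the Sommerfeld parameter, μ the reduced mass in amu, and T_6 the temperature in units of 10^6 K. For d + α (Z_1 = 1, Z_2 = 2, μ ≈ 1.33) at BBN-era temperatures around T_6 ≈ 300, E_0 ≈ 100 keV, which is why the 80-133 keV points quoted above cover the energies of interest.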

  20. Nursing Needs Big Data and Big Data Needs Nursing.

    PubMed

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  1. Army Aircraft Safety Performance Review, FY 87-FY 91. UH-60, OH-58D, AH-64, MH/CH-47D

    DTIC Science & Technology

    1992-12-01

    is $10,000 or more, but less than $200,000; a nonfatal injury that causes any loss of time from work beyond the day or shift on which it occurred; or a nonfatal illness or disability that causes loss of time from work or disability at any time (lost-time case). Class D accident: the resulting... flex borescope so that a complete inspection of the drive shaft can be made. Wire strike: while on approach to land at the scene of a UH-60 wire

  2. Effects of Microclimate Cooling on Physiology and Performance While Flying the UH-60 Helicopter Simulator in NBC Conditions in a Controlled Heat Environment

    DTIC Science & Technology

    1992-08-01

    including instrumenting and dressing the subjects, monitoring the physiological parameters in the simulator, and collecting and processing data. They...also was decided to extend the recruiting process to include all helicopter aviators, even if not UH-60 qualified. There is little in the flight profile...parameter channels, and the data were processed to produce a single root mean square (RMS) error value for each channel appropriate to each of the 9

  3. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a flexible UH-60A rotor in forward flight, where the rotor wake interacts with the rotor blades. These flow conditions involved blade vortex interaction and dynamic stall, two common conditions that occur as modern helicopter designs strive to achieve greater flight speeds and payload capacity. These numerical simulations utilized high-order spatial accuracy and delayed detached eddy simulation. Emphasis was placed on understanding how improved rotor wake resolution affects the prediction of the normal force, pitching moment, and chord force of the rotor. Adaptive mesh refinement was used to highly resolve the turbulent rotor wake in a computationally efficient manner. Moreover, blade vortex interaction was found to trigger dynamic stall. Time-dependent flow visualization was utilized to provide an improved understanding of the numerical and physical mechanisms involved with three-dimensional dynamic stall.

  4. Electron screening and its effects on big-bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Biao; Bertulani, C. A.; Balantekin, A. B.

    We study the effects of electron screening on nuclear reaction rates occurring during the big-bang nucleosynthesis epoch. The sensitivity of the predicted elemental abundances to electron screening is studied in detail. It is shown that electron screening does not produce noticeable changes in the abundances unless the traditional Debye-Hückel model for the treatment of electron screening in stellar environments is enhanced by several orders of magnitude. This work rules out electron screening as a relevant ingredient to big-bang nucleosynthesis, confirming a previous study [see Itoh et al., Astrophys. J. 488, 507 (1997)] and ruling out exotic possibilities for the treatment of screening beyond the mean-field theoretical approach.
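    For reference, the weak-screening enhancement factor at issue here has the Salpeter (Debye-Hückel) form (a standard result, quoted for orientation):

```latex
f = \exp\!\left(\frac{Z_1 Z_2 e^2}{R_D\, k_B T}\right),
```

    where R_D is the Debye radius of the plasma. In the hot, dilute BBN plasma R_D is enormous compared with the classical turning point of the colliding nuclei, so f - 1 is tiny; this is why the abundances respond only when the screening is artificially enhanced by orders of magnitude, as the abstract reports.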

  5. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

    High-resolution simulations of rotor/vortex-wake interaction for a UH60-A rotor under BVI and dynamic stallconditions were carried out with the OVERFLOW Navier-Stokes code.a. The normal force and pitching moment variation with azimuth angle were in good overall agreementwith flight-test data, similar to other CFD results reported in the literature.b. The wake-grid resolution did not have a significant effect on the rotor-blade airloads. This surprisingresult indicates that a wake grid spacing of (Delta)S=10% ctip is sufficient for engineering airloads predictionfor hover and forward flight. This assumes high-resolution body grids, high-order spatial accuracy, anda hybrid RANS/DDES turbulence model.c. Three-dimensional dynamic stall was found to occur due the presence of blade-tip vortices passing overa rotor blade on the retreating side. This changed the local airfoil angle of attack, causing stall, unlikethe 2D perspective of pure pitch oscillation of the local airfoil section.

  6. Testing of UH-60A helicopter transmission in NASA Lewis 2240-kW (3000-hp) facility

    NASA Technical Reports Server (NTRS)

    Mitchell, A. M.; Oswald, F. B.; Coe, H. H.

    1986-01-01

    The U.S. Army's UH-60A Black Hawk 2240-kW (3000-hp) class, twin-engine helicopter transmission was tested at the NASA Lewis Research Center. The vibration and efficiency test results will be used to enhance the data base for similar-class helicopters. Most of the data were obtained for a matrix of test conditions of 50 to 100 percent of rated rotor speed and 20 to 100 percent of rated input power. The transmission's mechanical efficiency at 100 percent of rated power was 97.3 and 97.5 percent with its inlet oil maintained at 355 and 372 K (180 and 210 F), respectively. The highest vibration reading was 72 g's rms at the upper housing side wall. Other vibration levels measured near the gear meshes are reported.

  7. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
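    The core idea BigBWA exploits is data parallelism: reads are split into independent chunks, each chunk is aligned by an unmodified BWA instance, and the results are merged, with Hadoop supplying the splitting, scheduling, and fault tolerance. The sketch below illustrates only that concept; it is not BigBWA's code or API, the "aligner" is a dummy stand-in, and Python multiprocessing takes the place of Hadoop mappers.

```python
# Illustrative map-style parallel alignment (concept only, not BigBWA's API).
from multiprocessing import Pool

def chunk_reads(reads, n_chunks):
    """Split reads into roughly equal chunks (the role of Hadoop's InputSplit)."""
    k, m = divmod(len(reads), n_chunks)
    return [reads[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n_chunks)]

def align_chunk(chunk):
    """Stand-in for running BWA on one chunk; emits dummy SAM-like records."""
    return [f"{read_id}\taligned" for read_id in chunk]

if __name__ == "__main__":
    reads = [f"read{i:04d}" for i in range(1000)]      # hypothetical read IDs
    with Pool(4) as pool:                              # 4 workers ~ 4 mappers
        parts = pool.map(align_chunk, chunk_reads(reads, 4))
    sam = [line for part in parts for line in part]    # merge (the reduce step)
    print(len(sam), "alignments")
```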

  8. Redesign and Rehost of the BIG STICK Nuclear Wargame Simulation

    DTIC Science & Technology

    1988-12-01

    described by Pressman [16]. The 4GT software development approach consists of four iterative phases: the requirements gathering phase, the design strategy... 2. BIG STICK Instructions and Planning Guidance. Air Command and Staff College, Air University, Maxwell AFB, AL, 1987. Unpublished manual. 3. Barry W... Software Engineering Notes, 7:29-32, April 1982. 17. Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Book

  9. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  10. Big data, big knowledge: big data for personalized healthcare.

    PubMed

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  11. Gear tooth stress measurements on the UH-60A helicopter transmission

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    1987-01-01

    The U.S. Army UH-60A (Black Hawk) 2200-kW (3000-hp) class twin-engine helicopter transmission was tested at the NASA Lewis Research Center. Results from these experimental (strain-gage) stress tests will enhance the data base for gear stress levels in transmissions of a similar power level. Strain-gage measurements were performed on the transmission's spiral-bevel combining pinions, the planetary Sun gear, and the ring gear. Tests were performed at rated speed and at torque levels from 25 to 100 percent of rated. One measurement series was also taken at a 90 percent speed level. The largest stress found was 760 MPa (110 ksi) on the combining pinion fillet. This is 230 percent greater than the AGMA index stress. Corresponding mean and alternating stresses were 330 and 430 MPa (48 and 62 ksi). These values are within the range of successful test experience reported for other transmissions. On the fillet of the ring gear, the largest stress found was 410 MPa (59 ksi). The ring-gear peak stress was found to be 11 percent less than an analytical (computer simulation) value and 24 percent greater than the AGMA index stress. A peak compressive stress of 650 MPa (94 ksi) was found at the center of the Sun gear tooth root.

  12. Nuclear Symbolism and Ritual--Upholding the National Myth: A Study of Indian and Pakistani Nuclear Proliferation

    DTIC Science & Technology

    2017-06-01

    be abolished as a means of settling any problem. Indian President Rajendra Prasad, February 21, 1955. We have a big bomb now. Indian Prime... Nuclear Bomb: The Impact on Global Proliferation, updated ed. (Berkeley; London: University of California Press, 1999), 33. 4 Perkovich, India's Nuclear Bomb, 292. 5 Perkovich, India's Nuclear Bomb, 409. 6 Perkovich, India's Nuclear Bomb, 6. weapons were considered rogue. Thus, India's refusal

  13. Big data uncertainties.

    PubMed

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
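    The two pitfalls are easy to demonstrate in a toy simulation (mine, not the paper's): a huge but biased and internally dependent sample can produce a tight-looking confidence interval around the wrong answer, while a small unbiased sample remains honest about its uncertainty.

```python
# Toy demonstration of bias plus dependence in a "big" dataset.
import random
import statistics

random.seed(0)
true_mean = 0.0

# Small unbiased sample: 100 i.i.d. draws centered on the true mean.
small = [random.gauss(true_mean, 1.0) for _ in range(100)]

# "Big data": 100,000 observations arriving in 1,000 clusters. Each cluster
# shares a random effect z (dependence), and every observation is shifted
# by +0.2 (selection bias).
big = []
for _ in range(1000):
    z = random.gauss(0.0, 1.0)                      # shared cluster effect
    big.extend(0.2 + z + random.gauss(0.0, 0.3) for _ in range(100))

for name, xs in (("small, unbiased", small), ("big, biased+dependent", big)):
    m = statistics.fmean(xs)
    se = statistics.stdev(xs) / len(xs) ** 0.5      # naive i.i.d. standard error
    print(f"{name:>22}: mean = {m:+.3f}, naive 95% CI = "
          f"[{m - 1.96 * se:+.3f}, {m + 1.96 * se:+.3f}]")
# The big sample's naive interval sits near +0.2 and excludes the true mean
# 0.0: systematic error plus underestimated variance, the paper's two pitfalls.
```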

  14. Installation of C-6533(XE-2)/ARC ICS in UH-1H helicopter

    NASA Astrophysics Data System (ADS)

    Hnat, J. A.

    1980-07-01

    This report documents the results of the installation of the C-6533(XE-2)/ARC ICS in UH-1H helicopter. Installation was performed at the AEL, Inc., Monmouth County Airport facility. Design of each installation was coordinated and approved by the Government. The mechanical and electrical installation drawings for the helicopter are attached as Appendix A of this report. The new ICS system consisted of new cabling, new intercoms and helmets rewired with new microphones. All four crew stations of the helicopter were reconfigured with the new system. Existing cabling for the standard ICS system remained in the aircraft but was securely stowed for later restoration of the aircraft. The helmets (4) were rewired using separate jacks for headphones and microphone lines. Transmit and receive cables were installed in the aircraft with a minimum separation of one inch between cables. A junction box was fabricated and installed on the aft end of the console to house the fan-out terminal strips. Transmit and receive lines' separation was maintained in the junction box. During the test phase the onboard radios were used with the new ICS system.

  15. Five Big Ideas

    ERIC Educational Resources Information Center

    Morgan, Debbie

    2012-01-01

    Designing quality continuing professional development (CPD) for those teaching mathematics in primary schools is a challenge. If the CPD is to be built on the scaffold of five big ideas in mathematics, what might be these five big ideas? Might it just be a case of, if you tell me your five big ideas, then I'll tell you mine? Here, there is…

  16. Experimental Investigation and Fundamental Understanding of a Slowed UH-60A Rotor at High Advance Ratios

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Yeo, Hyeonsoo; Norman, Thomas R.

    2011-01-01

    This paper describes and analyzes the measurements from a full-scale, slowed RPM, UH-60A rotor tested at the National Full-Scale Aerodynamics Complex 40- by 80-ft wind tunnel up to an advance ratio of 1.0. A comprehensive set of measurements, that includes performance, blade loads, hub loads and pressures/airloads, makes this data set unique. The measurements reveal new and rich aeromechanical phenomena that are special to this exotic regime. These include reverse chord dynamic stall, retreating side impulse in pitch-link load, large inboard-outboard elastic twist differential, supersonic flow at low subsonic advancing tip Mach numbers, diminishing rotor forces yet dramatic build up of blade loads, and dramatic blade loads yet benign levels of vibratory hub loads. The objective of this research is the fundamental understanding of these unique aeromechanical phenomena. The intent is to provide useful knowledge for the design of high speed, high efficiency, slowed RPM rotors of the future and a challenging database for advanced analyses validation.
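    For readers outside rotorcraft, the advance ratio quoted above is the standard nondimensional forward speed (textbook definition, not specific to this test):

```latex
\mu = \frac{V \cos\alpha_s}{\Omega R},
```

    where V is the free-stream speed, α_s the shaft angle, Ω the rotor rotational speed, and R the blade radius. On the retreating side the reverse-flow region is a circle of diameter μR, so at μ = 1.0 the entire retreating blade at the 270° azimuth lies in reversed flow, which is why phenomena such as the reverse chord dynamic stall noted above appear.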

  17. A NASTRAN investigation of simulated projectile damage effects on a UH-1B tail boom model

    NASA Technical Reports Server (NTRS)

    Futterer, A. T.

    1980-01-01

    A NASTRAN model of a UH-1B tail boom that had been designed for another project was used to investigate the effect on structural integrity of simulated projectile damage. Elements representing skin, and sections of stringers, longerons, and bulkheads were systematically deleted to represent projectile damage. The structure was loaded in a manner to represent the flight loads that would be imposed on the tail boom at a 130 knot cruise. The deflection of four points on the rear of the tail boom relative to the position of these points for the unloaded, undamaged condition of the tail boom was used as a measure of the loss of structural rigidity. The same procedure was then used with the material properties of the aluminum alloys replaced with the material properties of T300/5208 high-strength graphite/epoxy fibrous composite material, (0, ±45, 90)s for the skin and (0, ±45)s for the longerons, stringers, and bulkheads.

  18. Native Perennial Forb Variation Between Mountain Big Sagebrush and Wyoming Big Sagebrush Plant Communities

    NASA Astrophysics Data System (ADS)

    Davies, Kirk W.; Bates, Jon D.

    2010-09-01

    Big sagebrush ( Artemisia tridentata Nutt.) occupies large portions of the western United States and provides valuable wildlife habitat. However, information is lacking quantifying differences in native perennial forb characteristics between mountain big sagebrush [ A. tridentata spp. vaseyana (Rydb.) Beetle] and Wyoming big sagebrush [ A. tridentata spp. wyomingensis (Beetle & A. Young) S.L. Welsh] plant communities. This information is critical to accurately evaluate the quality of habitat and forage that these communities can produce because many wildlife species consume large quantities of native perennial forbs and depend on them for hiding cover. To compare native perennial forb characteristics on sites dominated by these two subspecies of big sagebrush, we sampled 106 intact big sagebrush plant communities. Mountain big sagebrush plant communities produced almost 4.5-fold more native perennial forb biomass and had greater native perennial forb species richness and diversity compared to Wyoming big sagebrush plant communities ( P < 0.001). Nonmetric multidimensional scaling (NMS) and the multiple-response permutation procedure (MRPP) demonstrated that native perennial forb composition varied between these plant communities ( P < 0.001). Native perennial forb composition was more similar within plant communities grouped by big sagebrush subspecies than expected by chance ( A = 0.112) and composition varied between community groups ( P < 0.001). Indicator analysis did not identify any perennial forbs that were completely exclusive and faithful, but did identify several perennial forbs that were relatively good indicators of either mountain big sagebrush or Wyoming big sagebrush plant communities. Our results suggest that management plans and habitat guidelines should recognize differences in native perennial forb characteristics between mountain and Wyoming big sagebrush plant communities.

  19. The mechanism of a nuclear pore assembly: a molecular biophysics view.

    PubMed

    Kuvichkin, Vasily V

    2011-06-01

    The basic problem of nuclear pore assembly is the big perinuclear space that must be overcome for nuclear membrane fusion and pore creation. Our investigations of ternary complexes (DNA-PC liposomes-Mg²⁺) and modern conceptions of nuclear pore structure allowed us to propose a new mechanism of nuclear pore assembly. DNA-induced fusion of liposomes (membrane vesicles) with a single lipid bilayer or with two closely located nuclear membranes is considered. After such fusion, traces of a complex of ssDNA with lipids remain on the lipid bilayer surface. Fusion of two identical small liposomes (membrane vesicles) < 100 nm in diameter forms a "big" liposome (vesicle) with ssDNA on the vesicle equator. The presence of ssDNA on the liposome surface gives a biphasic character to the fusion kinetics. The "big" membrane vesicle surrounded by ssDNA is the basis of nuclear pore assembly. Its contact with the nuclear envelope leads to fast fusion of one vesicle half with one nuclear membrane; a fusion delay then ensues when the ssDNA reaches the membrane. The next step is the turning inside-out of the second vesicle half and its fusion with the other nuclear membrane. A hole is formed between the two membranes, and nucleoporins begin pore complex assembly around the ssDNA. The surface tension of vesicles and nuclear membranes, along with the kinetic energy of the liquid inside a vesicle, plays the main role in this process. Special cases of nuclear pore formation are considered: pore formation on both sides of the nuclear envelope, the differences between pores formed in various cell-cycle phases, and linear nuclear pore clusters.

  20. A Model-based Health Monitoring and Diagnostic System for the UH-60 Helicopter. Appendix D

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Hindson, William; Sanderfer, Dwight; Deb, Somnath; Domagala, Chuck

    2001-01-01

    Model-based reasoning techniques hold much promise in providing comprehensive monitoring and diagnostics capabilities for complex systems. We are exploring the use of one of these techniques, which utilizes multi-signal modeling and the TEAMS-RT real-time diagnostic engine, on the UH-60 Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) flight research aircraft. We focus on the engine and transmission systems, and acquire sensor data across the 1553 bus as well as by direct analog-to-digital conversion from sensors to the QHuMS (Qualtech health and usage monitoring system) computer. The QHuMS computer uses commercially available components and is rack-mounted in the RASCAL facility. A multi-signal model of the transmission and engine subsystems enables studies of system testability and analysis of the degree of fault isolation available with various instrumentation suites. The model and examples of these analyses will be described and the data architectures enumerated. Flight tests of this system will validate the data architecture and provide real-time flight profiles to be further analyzed in the laboratory.
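    A minimal sketch of the dependency-matrix reasoning that underlies multi-signal models and engines like TEAMS-RT (illustrative only; the component and test names are hypothetical and the real system is far richer):

```python
# Toy dependency-matrix fault isolation. Rows are candidate faults, columns
# are tests; a 1 means the fault is expected to cause that test to fail.
D = {
    "oil_pump_fault": {"oil_press_low": 1, "oil_temp_high": 1, "chip_detect": 0},
    "bearing_spall":  {"oil_press_low": 0, "oil_temp_high": 1, "chip_detect": 1},
    "sensor_drift":   {"oil_press_low": 1, "oil_temp_high": 0, "chip_detect": 0},
}

def isolate(observed):
    """Return the faults whose signature matches the observed pass/fail pattern."""
    return [fault for fault, signature in D.items()
            if all(signature[test] == result for test, result in observed.items())]

# Example: the low-oil-pressure and high-oil-temperature tests fail,
# while the chip detector passes.
print(isolate({"oil_press_low": 1, "oil_temp_high": 1, "chip_detect": 0}))
# -> ['oil_pump_fault']
```

    A real-time engine evaluates such signatures continuously against incoming sensor data, which is what makes testability analysis (how well a given instrumentation suite can isolate faults) possible before flight.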

  1. Loads Correlation of a Full-Scale UH-60A Airloads Rotor in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Romander, Ethan A.

    2012-01-01

    Wind tunnel measurements of the rotor trim, blade airloads, and structural loads of a full-scale UH-60A Black Hawk main rotor are compared with calculations obtained using the comprehensive rotorcraft analysis CAMRAD II and a coupled CAMRAD II/OVERFLOW 2 analysis. A speed sweep at constant lift up to an advance ratio of 0.4 and a thrust sweep at constant speed into deep stall are investigated. The coupled analysis shows significant improvement over comprehensive analysis. Normal force phase is better captured and pitching moment magnitudes are better predicted including the magnitude and phase of the two stall events in the fourth quadrant at the deeply stalled condition. Structural loads are, in general, improved with the coupled analysis, but the magnitude of chord bending moment is still significantly underpredicted. As there are three modes around 4 and 5/rev frequencies, the structural responses to the 5/rev airloads due to dynamic stall are magnified and thus care must be taken in the analysis of the deeply stalled condition.

  2. Big Data and Neuroimaging.

    PubMed

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  3. Population connectivity of endangered Ozark big-eared bats (Corynorhinus townsendii ingens)

    USGS Publications Warehouse

    Lee, Dana N.; Stark, Richard C.; Puckette, William L.; Hamilton, Meredith J.; Leslie, David M.; Van Den Bussche, Ronald A.

    2015-01-01

    The endangered Ozark big-eared bat (Corynorhinus townsendii ingens) is restricted to eastern Oklahoma and western and north-central Arkansas, where populations may be susceptible to losses of genetic variation due to patchy distribution of colonies and potentially small effective population sizes. We used mitochondrial D-loop DNA sequences and 15 nuclear microsatellite loci to determine population connectivity among Ozark big-eared bat caves. Assessment of 7 caves revealed a haplotype not detected in a previous study (2002–2003) and gene flow among colonies in eastern Oklahoma. Our data suggest genetic mixing of individuals, which may be occurring at nearby swarming sites in the autumn. Further evidence of limited gene flow between caves in Oklahoma with a cave in Arkansas highlights the importance of including samples from geographically widespread caves to fully understand gene flow in this subspecies. It appears autumn swarming sites and winter hibernacula play an important role in providing opportunities for mating; therefore, we suggest protection of these sites, maternity caves, and surrounding habitat to facilitate gene flow among populations of Ozark big-eared bats.

  4. Summary of Full-Scale Blade Displacement Measurements of the UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Abrego, Anita I.; Meyn, Larry; Burner, Alpheus W.; Barrows, Danny A.

    2016-01-01

    Blade displacement measurements using multi-camera photogrammetry techniques were acquired for a full-scale UH-60A rotor, tested in the National Full-Scale Aerodynamic Complex 40-Foot by 80-Foot Wind Tunnel. The measurements, acquired over the full rotor azimuth, encompass a range of test conditions that include advance ratios from 0.15 to 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. The objective was to measure the blade displacements and deformations of the four rotor blades and provide a benchmark blade displacement database to be utilized in the development and validation of rotorcraft prediction techniques. An overview of the blade displacement measurement methodology, system development, and data analysis techniques is presented. Sample results based on the final set of camera calibrations, data reduction procedures, and estimated corrections that account for registration errors due to blade elasticity are shown. Differences in blade root pitch, flap, and lag between the previously reported results and the current results are small. However, even small changes in estimated root flap and pitch can lead to significant differences in the blade elasticity values.

  5. Cryptography for Big Data Security

    DTIC Science & Technology

    2015-07-13

    Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. [Only front matter survives in this record: an author list beginning "Ariel Hamlin, Nabil …" with contact arkady@ll.mit.edu, and table-of-contents entries for Chapter 1, "Cryptography for Big Data Security," starting with Section 1.1, "Introduction."]

  6. Data: Big and Small.

    PubMed

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  7. Big Data in industry

    NASA Astrophysics Data System (ADS)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon has come the need for new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. This growth creates a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data that is generated continuously, much of it by the Internet of Things (IoT: cameras, satellites, cars, GPS navigation, etc.). The challenge is to devise new technologies and tools for managing and exploiting these large amounts of data. Big Data has been a hot topic in IT circles in recent years, and it is increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service, presenting a big data analytics service-oriented architecture. The paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.
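
    As a quick reference for the units named above, a minimal sketch (decimal SI convention; the binary variants EiB/ZiB/YiB use powers of 1024 instead):

    ```python
    # SI data-volume units mentioned in the abstract, in bytes.
    units = {"exabyte (EB)": 10**18, "zettabyte (ZB)": 10**21, "yottabyte (YB)": 10**24}
    for name, size in units.items():
        print(f"1 {name} = {size:.0e} bytes")
    ```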

  8. Fixing the Big Bang Theory's Lithium Problem

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-02-01

    How did our universe come into being? The Big Bang theory is a widely accepted and highly successful cosmological model of the universe, but it does introduce one puzzle: the cosmological lithium problem. Have scientists now found a solution? Too Much Lithium: In the Big Bang theory, the universe expanded rapidly from a very high-density and high-temperature state dominated by radiation. This theory has been validated again and again: the discovery of the cosmic microwave background radiation and observations of the large-scale structure of the universe both beautifully support the Big Bang theory, for instance. But one pesky trouble spot remains: the abundance of lithium. [Figure caption: The arrows show the primary reactions involved in Big Bang nucleosynthesis; their flux ratios, as predicted by the authors' model, are given on the right. Synthesizing primordial elements is complicated! (Hou et al. 2017)] According to Big Bang nucleosynthesis theory, primordial nucleosynthesis ran wild during the first half hour of the universe's existence. This produced most of the universe's helium and small amounts of other light nuclides, including deuterium and lithium. But while predictions match the observed primordial deuterium and helium abundances, Big Bang nucleosynthesis theory overpredicts the abundance of primordial lithium by about a factor of three. This inconsistency is known as the cosmological lithium problem, and attempts to resolve it using conventional astrophysics and nuclear physics over the past few decades have not been successful. In a recent publication led by Suqing Hou (Institute of Modern Physics, Chinese Academy of Sciences) and advisor Jianjun He (Institute of Modern Physics / National Astronomical Observatories, Chinese Academy of Sciences), however, a team of scientists has proposed an elegant solution to this problem. [Figure caption: Time and temperature evolution of the abundances of primordial light elements during the beginning of the universe. The authors' model (dotted lines

  9. The big data-big model (BDBM) challenges in ecological research

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine the feedback of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from the sensors in those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite greatly simplifying the real systems, are still complex enough to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  10. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  11. Helicopter noise measurements data report. volume II. helicopter models: Bell 212 (UH-1N), Sikorsky S-61 (SH-3A), Sikorsky S-64 'Skycrane' (CH-54B), Boeing Vertol 'Chinook' (CH-47C)

    DOT National Transportation Integrated Search

    1977-04-01

    The helicopter models used in this test program were the Hughes 300C, Hughes 500C, Bell 47-G, Bell 206-L, Bell 212 (UH-1N), Sikorsky S-61 (SH-3A), Sikorsky S-64 'Skycrane' (CH-54B), and Boeing Vertol 'Chinook' CH-47C. Volume I contains the measured n...

  12. Nuclear physics and cosmology

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1989-01-01

    Nuclear physics has provided one of two critical observational tests of all Big Bang cosmology, namely Big Bang Nucleosynthesis. Furthermore, this same nuclear physics input enables a prediction to be made about one of the most fundamental physics questions of all, the number of elementary particle families. The standard Big Bang Nucleosynthesis arguments are reviewed. The primordial He abundance is inferred from He-C, He-N, and He-O correlations. The strengthened Li constraint, as well as D-2 plus He-3, is used to limit the baryon density. This limit is the key argument behind the need for non-baryonic dark matter. The allowed number of neutrino families, N(ν), is delineated using the new neutron lifetime value of τ(n) = 890 ± 4 s (τ½ = 10.3 min). The formal statistical result is N(ν) = 2.6 ± 0.3 (1σ), providing a reasonable fit (1.3σ) to three families but making a fourth light (m(ν) ≤ 10 MeV) neutrino family exceedingly unlikely (≳ 4.7σ). It is also shown that uncertainties induced by postulating a first-order quark-baryon phase transition do not seriously affect the conclusions.
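
    The quoted significance levels follow from simple arithmetic on the stated result; a minimal check, assuming the Gaussian reading of the ±0.3 (1σ) uncertainty:

    ```python
    # Back-of-envelope check of the sigma levels quoted in the abstract,
    # given N(nu) = 2.6 +/- 0.3.
    n_nu, sigma = 2.6, 0.3
    for families in (3, 4):
        z = (families - n_nu) / sigma
        print(f"{families} families: {z:.1f} sigma")  # -> 1.3 sigma, 4.7 sigma
    ```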

  13. Big data need big theory too

    PubMed Central

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  14. Big data need big theory too.

    PubMed

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  15. Increased plasma levels of big-endothelin-2 and big-endothelin-3 in patients with end-stage renal disease.

    PubMed

    Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi

    2012-10-15

    Big endothelins (pro-endothelin; inactive precursor) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members: ET-1, ET-2 and ET-3, from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and organs such as the thyroid, salivary gland and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether the endogenous ET-1, ET-2, and ET-3 systems, including big ETs, are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3 and big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of both ET-1 and ET-3 and big ET-1, big ET-2, and big ET-3 were markedly elevated, and the increase was higher for big ETs (big ET-1, 4-fold; big ET-2, 6-fold; big ET-3, 5-fold) than for ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 are markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of the endothelin converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Big Data and medicine: a big deal?

    PubMed

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  17. Untapped Potential: Fulfilling the Promise of Big Brothers Big Sisters and the Bigs and Littles They Represent

    ERIC Educational Resources Information Center

    Bridgeland, John M.; Moore, Laura A.

    2010-01-01

    American children represent a great untapped potential in our country. For many young people, choices are limited and the goal of a productive adulthood is a remote one. This report paints a picture of who these children are, shares their insights and reflections about the barriers they face, and offers ways forward for Big Brothers Big Sisters as…

  18. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    PubMed

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  19. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    ERIC Educational Resources Information Center

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  20. Implementing Big History.

    ERIC Educational Resources Information Center

    Welter, Mark

    2000-01-01

    Contends that world history should be taught as "Big History," a view that includes all space and time beginning with the Big Bang. Discusses five "Cardinal Questions" that serve as a course structure and address the following concepts: perspectives, diversity, change and continuity, interdependence, and causes. (CMK)

  1. Big data for health.

    PubMed

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  2. Correlating CFD Simulation with Wind Tunnel Test for the Full-Scale UH-60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Romander, Ethan; Norman, Thomas R.; Chang, I-Chung

    2011-01-01

    Data from the recent UH-60A Airloads Test in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames Research Center are presented and compared to predictions computed by a loosely coupled Computational Fluid Dynamics (CFD)/Comprehensive analysis. Primary calculations model the rotor in free air, but initial calculations including a model of the tunnel test section are also presented. The conditions studied include a speed sweep at constant lift up to an advance ratio of 0.4 and a thrust sweep at constant speed into deep stall. Predictions show reasonable agreement with measurement for integrated performance indicators such as power and propulsive force but occasionally deviate significantly. Detailed analysis of sectional airloads reveals good correlation in overall trends for normal force and pitching moment, although the pitching moment mean often differs. Chord force is frequently plagued by mean shifts and an overprediction of drag on the advancing side. Locations of significant aerodynamic phenomena are predicted accurately, although the magnitude of individual events is often missed.

  3. Airloads Correlation of the UH-60A Rotor Inside the 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Chang, I-Chung; Norman, Thomas R.; Romander, Ethan A.

    2013-01-01

    The presented research validates the capability of a loosely-coupled computational fluid dynamics (CFD) and comprehensive rotorcraft analysis (CRA) code to calculate the flowfield around a rotor and test stand mounted inside a wind tunnel. The CFD/CRA predictions for the full-scale UH-60A Airloads Rotor inside the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center are compared with the latest measured airloads and performance data. The studied conditions include a speed sweep at constant lift up to an advance ratio of 0.4 and a thrust sweep at constant speed up to and including stall. For the speed sweep, wind tunnel modeling becomes important at advance ratios greater than 0.37 and test stand modeling becomes increasingly important as the advance ratio increases. For the thrust sweep, both the wind tunnel and test stand modeling become important as the rotor approaches stall. Despite the beneficial effects of modeling the wind tunnel and test stand, the new models do not completely resolve the current airload discrepancies between prediction and experiment.
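
    The "loosely coupled" CFD/CRA procedure used in these rotor studies is commonly implemented as a delta-coupling loop: the comprehensive code trims the rotor using its internal aerodynamics plus a correction term, the CFD solver recomputes airloads for the trimmed state, and the correction is updated until it stops changing. The toy scalar sketch below illustrates only the iteration structure; all functions are hypothetical stand-ins, and the real quantities are azimuthal airload distributions, not scalars.

    ```python
    # Toy delta-coupling loop. The "CFD" and "lifting-line" models here are
    # one-line stand-ins; only the coupling logic is the point.
    def lifting_line_airload(control):
        return 2.0 * control          # crude interior aerodynamic model

    def cfd_airload(control):
        return 2.0 * control + 0.3    # higher-fidelity model the loop should recover

    def comprehensive_trim(target, delta):
        # trim so that (lifting-line airload + delta correction) meets the target
        control = (target - delta) / 2.0
        return control, lifting_line_airload(control)

    def loose_coupling(target, max_iter=20, tol=1e-6):
        delta = 0.0
        for _ in range(max_iter):
            control, al_ll = comprehensive_trim(target, delta)
            new_delta = cfd_airload(control) - al_ll   # CFD-minus-comprehensive delta
            if abs(new_delta - delta) < tol:           # converged: delta stopped changing
                return control, al_ll + new_delta
            delta = new_delta
        return control, al_ll + delta

    control, airload = loose_coupling(target=1.0)
    print(f"trimmed control = {control:.3f}, converged airload = {airload:.3f}")
    # at convergence the CFD airload meets the trim target exactly
    ```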

  4. Introduction to Big Bang nucleosynthesis - Open and closed models, anisotropies

    NASA Astrophysics Data System (ADS)

    Tayler, R. J.

    1982-10-01

    A variety of observations suggest that the universe had a hot dense origin and that the pregalactic composition of the universe was determined by nuclear reactions that occurred in the first few minutes. There is no unique hot Big Bang theory, but the simplest version produces a primeval chemical composition that is in good qualitative agreement with the abundances deduced from observation. Whether or not any Big Bang theory will provide quantitative agreement with observations depends on a variety of factors in elementary particle physics (number and masses of stable or long-lived particles, half-life of neutron, structure of grand unified theories) and from observational astronomy (present mean baryon density of the universe, the Hubble constant and deceleration parameter). The influence of these factors on the abundances is discussed, as is the effect of departures from homogeneity and isotropy in the early universe.

  5. Where Big Data and Prediction Meet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James; Brase, Jim M.; Hart, Bill

    Our ability to assemble and analyze massive data sets, often referred to under the title of "big data", is an increasingly important tool for shaping national policy. This in turn has introduced issues ranging from privacy concerns to cyber security. But as IBM's John Kelly emphasized in the last Innovation, making sense of the vast arrays of data will require radically new computing tools. In the past, technologies and tools for the analysis of big data were viewed as quite different from the traditional realm of high performance computing (HPC), with its huge models of phenomena such as global climate or supporting the nuclear test moratorium. Looking ahead, this will change, with very positive benefits for both worlds. Societal issues such as global security, economic planning and genetic analysis demand increased understanding that goes beyond existing data analysis and reduction. The modeling world often produces simulations that are complex compositions of mathematical models and experimental data. This has resulted in outstanding successes such as the annual assessment of the state of the US nuclear weapons stockpile without underground nuclear testing. Ironically, while there were historically many tests conducted, this body of data provides only modest insight into the underlying physics of the system. A great deal of emphasis was thus placed on the level of confidence we can develop for the predictions. As data analytics and simulation come together, there is a growing need to assess the confidence levels in both the data being gathered and the complex models used to make predictions. An example of this is assuring the security or optimizing the performance of critical infrastructure systems such as the power grid. If one wants to understand the vulnerabilities of the system or the impacts of predicted threats, full-scale tests of the grid against threat scenarios are unlikely. Preventive measures would need to be predicated on well-defined margins of confidence

  6. Big Data: Implications for Health System Pharmacy

    PubMed Central

    Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.

    2016-01-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services. PMID:27559194

  7. Big Data: Implications for Health System Pharmacy.

    PubMed

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  8. Investigation of Rotor Performance and Loads of a UH-60A Individual Blade Control System

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Romander, Ethan A.; Norman, Thomas R.

    2011-01-01

    Wind tunnel measurements of performance, loads, and vibration of a full-scale UH-60A Black Hawk main rotor with an individual blade control (IBC) system are compared with calculations obtained using the comprehensive helicopter analysis CAMRAD II and a coupled CAMRAD II/OVERFLOW 2 analysis. Measured data show a 5.1% rotor power reduction (8.6% rotor lift to effective-drag ratio increase) using 2/rev IBC actuation with 2.0° amplitude at μ = 0.4. At the optimum IBC phase for rotor performance, IBC actuator force (pitch link force) decreased, and neither flap nor chord bending moments changed significantly. CAMRAD II predicts the rotor power variations with the IBC phase reasonably well at μ = 0.35. However, the correlation degrades at μ = 0.4. Coupled CAMRAD II/OVERFLOW 2 shows excellent correlation with the measured rotor power variations with the IBC phase at both μ = 0.35 and μ = 0.4. Maximum reduction of IBC actuator force is better predicted with CAMRAD II, but general trends are better captured with the coupled analysis. The correlation of vibratory hub loads is generally poor by both methods, although the coupled analysis somewhat captures general trends.

  9. Performance and Loads Correlation of a UH-60A Slowed Rotor at High Advance Ratios

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B.

    2012-01-01

    Measured data from the slowed rotor part of the 2010 UH-60A Airloads Rotor test in the NASA Ames 40- by 80-Foot Wind Tunnel are compared with CAMRAD II calculations. The emphasis in this initial study is to correlate overall trends. This analytical effort considers advance ratios from 0.3 to 1.0, with the rotor rotational speed at 40% NR. The rotor performance parameters considered are the thrust coefficient, power coefficient, L/De, torque, and H-force. The blade loads considered are the half peak-to-peak, mid-span and outboard torsion, flatwise, and chordwise moments, and the pitch link load. For advance ratios ≤ 0.7, the overall trends for the performance and loads (excluding the pitch link load) could be captured, but with substantial overprediction or underprediction. The correlation gradually deteriorates as the advance ratio is increased, and for advance ratios ≥ 0.8 there is no correlation. The pitch link load correlation is not good. There is considerable scope for improvement in the prediction of the blade loads. Considering the modeling complexity associated with the unconventional operating condition under consideration, the current predictive ability to capture overall trends is encouraging.

  10. Investigation of Rotor Performance and Loads of a UH-60A Individual Blade Control System

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Romander, Ethan A.; Norman, Thomas R.

    2011-01-01

    Wind tunnel measurements of performance, loads, and vibration of a full-scale UH-60A Black Hawk main rotor with an individual blade control (IBC) system are compared with calculations obtained using the comprehensive helicopter analysis CAMRAD II and a coupled CAMRAD II/OVERFLOW 2 analysis. Measured data show a 5.1% rotor power reduction (8.6% rotor lift to effective-drag ratio increase) using 2/rev IBC actuation with 2.0° amplitude at μ = 0.4. At the optimum IBC phase for rotor performance, IBC actuator force (pitch link force) decreased, and neither flap nor chord bending moments changed significantly. CAMRAD II predicts the rotor power variations with IBC phase reasonably well at μ = 0.35. However, the correlation degrades at μ = 0.4. Coupled CAMRAD II/OVERFLOW 2 shows excellent correlation with the measured rotor power variations with IBC phase at both μ = 0.35 and μ = 0.4. Maximum reduction of IBC actuator force is better predicted with CAMRAD II, but general trends are better captured with the coupled analysis. The correlation of vibratory hub loads is generally poor by both methods, although the coupled analysis somewhat captures general trends.

  11. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
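
    The remote-slice behavior described above is easy to see from client code. A minimal sketch using the pyBigWig library (a real package; the URL below is a placeholder): only the queried interval is fetched, not the whole file.

    ```python
    # Remote access to a bigWig track: the indexed format lets the library
    # fetch just the requested slice over HTTP. URL is hypothetical.
    import pyBigWig

    bw = pyBigWig.open("http://example.org/some_track.bw")
    print(bw.chroms())                     # chromosome -> length mapping
    print(bw.stats("chr1", 0, 1_000_000))  # mean signal over the first megabase
    print(bw.values("chr1", 0, 10))        # per-base values for a tiny window
    bw.close()
    ```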

  12. Nuclear waste viewed in a new light; a synchrotron study of uranium encapsulated in grout.

    PubMed

    Stitt, C A; Hart, M; Harker, N J; Hallam, K R; MacFarlane, J; Banos, A; Paraskevoulakos, C; Butcher, E; Padovani, C; Scott, T B

    2015-03-21

    How do you characterise the contents of a sealed nuclear waste package without breaking it open? This question is important when the contained corrosion products are potentially reactive with air and radioactive. Synchrotron X-rays have been used to perform micro-scale in-situ observation and characterisation of uranium encapsulated in grout; a simulation for a typical intermediate level waste storage package. X-ray tomography and X-ray powder diffraction generated both qualitative and quantitative data from a grout-encapsulated uranium sample before, and after, deliberately constrained H2 corrosion. Tomographic reconstructions provided a means of assessing the extent, rates and character of the corrosion reactions by comparing the relative densities between the materials and the volume of reaction products. The oxidation of uranium in grout was found to follow the anoxic U+H2O oxidation regime, and the pore network within the grout was observed to influence the growth of uranium hydride sites across the metal surface. Powder diffraction analysis identified the corrosion products as UO2 and UH3, and permitted measurement of corrosion-induced strain. Together, X-ray tomography and diffraction provide means of accurately determining the types and extent of uranium corrosion occurring, thereby offering a future tool for isolating and studying the reactions occurring in real full-scale waste package systems. Copyright © 2014 Elsevier B.V. All rights reserved.
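
    The corrosion products identified (UO2 and UH3) correspond to standard anoxic uranium chemistry. For orientation, the two textbook reactions are shown below; these are general equations, not ones quoted from the paper:

    ```latex
    % Anoxic uranium-water oxidation, followed by hydriding of the metal by
    % the hydrogen that reaction liberates:
    \mathrm{U} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{UO_2} + 2\,\mathrm{H_2}
    \qquad\text{and}\qquad
    2\,\mathrm{U} + 3\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{UH_3}
    ```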

  13. Big Challenges and Big Opportunities: The Power of "Big Ideas" to Change Curriculum and the Culture of Teacher Planning

    ERIC Educational Resources Information Center

    Hurst, Chris

    2014-01-01

    Mathematical knowledge of pre-service teachers is currently "under the microscope" and the subject of research. This paper proposes a different approach to teacher content knowledge based on the "big ideas" of mathematics and the connections that exist within and between them. It is suggested that these "big ideas"…

  14. Countering misinformation concerning big sagebrush

    Treesearch

    Bruce L Welch; Craig Criddle

    2003-01-01

    This paper examines the scientific merits of eight axioms of range or vegetative management pertaining to big sagebrush. These axioms are: (1) Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis) does not naturally exceed 10 percent canopy cover and mountain big sagebrush (A. t. ssp. vaseyana) does not naturally exceed 20 percent canopy...

  15. BigNeuron dataset V.0.0

    DOE Data Explorer

    Ramanathan, Arvind

    2016-01-01

    The cleaned bench-testing reconstructions for the gold166 datasets have been put online at GitHub: https://github.com/BigNeuron/Events-and-News/wiki/BigNeuron-Events-and-News and https://github.com/BigNeuron/Data/releases/tag/gold166_bt_v1.0. The respective image datasets were released a while ago from other sites (the main pointer is likewise available at GitHub: https://github.com/BigNeuron/Data/releases/tag/Gold166_v1), but since the files were big, the actual downloads were hosted separately on three continents.

  16. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    PubMed

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet it has been touted to be. Here our main concern is the overall impact of big data: the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking to problem definition in order to address science challenges.

  17. Challenges of Big Data Analysis.

    PubMed

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
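
    The spurious correlation induced by high dimensionality is easy to demonstrate numerically. A minimal sketch with synthetic data (illustrative only, not the authors' analysis): with many noise features and few samples, some feature always looks strongly correlated with the target.

    ```python
    # Demonstration of spurious correlation: many noise features, few samples.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 10_000                      # few samples, many features
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)             # target independent of every feature

    # Pearson correlation of y with each column of X
    corr = (X - X.mean(0)).T @ (y - y.mean())
    corr /= (n * X.std(0) * y.std())
    print(f"max |correlation| among {p} pure-noise features: {np.abs(corr).max():.2f}")
    # typically ~0.6 here, despite zero true association
    ```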

  18. Challenges of Big Data Analysis

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-01-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469

  19. Big Data and Chemical Education

    ERIC Educational Resources Information Center

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  20. Big data in fashion industry

    NASA Astrophysics Data System (ADS)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data refers to analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences, and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. The paper also gives a broad classification of the types of fashion data and briefly defines them. Finally, the methodology and working of a system that will use this data are briefly described.

  1. The Big6 Collection: The Best of the Big6 Newsletter.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    The Big6 is a complete approach to implementing meaningful learning and teaching of information and technology skills, essential for 21st century living. Including in-depth articles, practical tips, and explanations, this book offers a varied range of material about students and teachers, the Big6, and curriculum. The book is divided into 10 main…

  2. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both “machine learning” algorithms as well as “unsupervised” and “supervised” examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:27908398

  3. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both “machine learning” algorithms as well as “unsupervised” and “supervised” examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:24799088

  4. Big data bioinformatics.

    PubMed

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
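
    The supervised/unsupervised distinction this review draws can be made concrete in a few lines. The abstract points to R packages; the sketch below uses Python's scikit-learn instead, purely for illustration of the two paradigms:

    ```python
    # Supervised vs. unsupervised learning on the same synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # supervised: the model is trained against known labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("supervised accuracy:", clf.score(X_te, y_te))

    # unsupervised: the model sees no labels, only structure in X
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster sizes:", [int((km.labels_ == k).sum()) for k in (0, 1)])
    ```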

  5. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    PubMed

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Treesearch

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  7. The Big Bang Theory

    ScienceCinema

    Lincoln, Don

    2018-01-16

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn't true. In this video, Fermilab's Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  8. The Big Bang Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn't true. In this video, Fermilab's Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  9. Seeding considerations in restoring big sagebrush habitat

    Treesearch

    Scott M. Lambert

    2005-01-01

    This paper describes methods of managing or seeding to restore big sagebrush communities for wildlife habitat. The focus is on three big sagebrush subspecies, Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis), basin big sagebrush (Artemisia tridentata ssp. tridentata), and mountain...

  10. ARTIST CONCEPT - BIG JOE

    NASA Image and Video Library

    1963-09-01

    S63-19317 (October 1963) --- Pen and ink views of comparative arrangements of several capsules including the existing "Big Joe" design, the compromise "Big Joe" design, and the "Little Joe". All capsule designs are labeled and include dimensions. Photo credit: NASA

  11. Big Society, Big Deal?

    ERIC Educational Resources Information Center

    Thomson, Alastair

    2011-01-01

    Political leaders like to put forward guiding ideas or themes which pull their individual decisions into a broader narrative. For John Major it was Back to Basics, for Tony Blair it was the Third Way and for David Cameron it is the Big Society. While Mr. Blair relied on Lord Giddens to add intellectual weight to his idea, Mr. Cameron's legacy idea…

  12. Flight Regime Recognition Analysis for the Army UH-60A IMDS Usage

    DTIC Science & Technology

    2006-12-01

    List-of-figures residue from the report; recoverable titles: Figure 34, "The Behavior of the Parameter Weight.On.Wheels"; Figure 35, "The Behavior of a Take-off Regime in Subsetting Process"; Figure 36, "Subsetting the Big Data into Smaller Sets (WOW, Flags...)". From the body text: "...of components can be extended to their true lifetime (Bechhoefer, n.d.). This is directly related to the accurate representation of regime..."

  13. Big Data Analytics in Medicine and Healthcare.

    PubMed

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.

  14. The Big Bang Singularity

    NASA Astrophysics Data System (ADS)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore, we compare and contrast the two geometries throughout.

  15. Medical big data: promise and challenges.

    PubMed

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from big data in other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
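
    For readers unfamiliar with the propensity-score approach mentioned here, the sketch below shows one common variant, inverse-propensity weighting, on synthetic data. It assumes scikit-learn and illustrates only the general idea, not any specific method from the paper.

    ```python
    # Minimal inverse-propensity-weighting sketch on synthetic data: model
    # treatment assignment from a confounder, then reweight subjects by
    # inverse propensity so the treated/untreated comparison is deconfounded.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    age = rng.normal(60, 10, n)                       # confounder
    p_treat = 1 / (1 + np.exp(-(age - 60) / 10))      # older patients treated more
    treated = rng.random(n) < p_treat
    outcome = 0.05 * age + 2.0 * treated + rng.normal(0, 1, n)  # true effect = 2.0

    # Propensity score: P(treated | age), estimated by logistic regression.
    ps = (LogisticRegression()
          .fit(age.reshape(-1, 1), treated)
          .predict_proba(age.reshape(-1, 1))[:, 1])
    w = np.where(treated, 1 / ps, 1 / (1 - ps))       # inverse-propensity weights

    naive = outcome[treated].mean() - outcome[~treated].mean()
    ipw = (np.average(outcome[treated], weights=w[treated])
           - np.average(outcome[~treated], weights=w[~treated]))
    print(f"naive difference: {naive:.2f}  (confounded by age)")
    print(f"IPW estimate:     {ipw:.2f}  (close to the true effect of 2.0)")
    ```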

  16. Medical big data: promise and challenges

    PubMed Central

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-01-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from big data in other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology. PMID:28392994

  17. Measuring the Promise of Big Data Syllabi

    ERIC Educational Resources Information Center

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  18. The big bang

    NASA Astrophysics Data System (ADS)

    Silk, Joseph

    Our universe was born billions of years ago in a hot, violent explosion of elementary particles and radiation - the big bang. What do we know about this ultimate moment of creation, and how do we know it? Drawing upon the latest theories and technology, this new edition of The Big Bang is a sweeping, lucid account of the event that set the universe in motion. Joseph Silk begins his story with the first microseconds of the big bang, on through the evolution of stars, galaxies, clusters of galaxies, quasars, and into the distant future of our universe. He also explores the fascinating evidence for the big bang model and recounts the history of cosmological speculation. Revised and updated, this new edition features all the most recent astronomical advances, including: photos and measurements from the Hubble Space Telescope, Cosmic Background Explorer Satellite (COBE), and Infrared Space Observatory; the latest estimates of the age of the universe; new ideas in string and superstring theory; recent experiments on neutrino detection; new theories about the presence of dark matter in galaxies; new developments in the theory of the formation and evolution of galaxies; and the latest ideas about black holes, wormholes, quantum foam, and multiple universes.

  19. Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.

    PubMed

    Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C

    2006-12-08

    Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  20. A comparison of lightning and nuclear electromagnetic pulse response of a helicopter

    NASA Technical Reports Server (NTRS)

    Easterbrook, C. C.; Perala, R. A.

    1984-01-01

    A numerical modeling technique is utilized to investigate the response of a UH-60A helicopter to both lightning and nuclear electromagnetic pulses (NEMP). The analytical approach involves three-dimensional time-domain finite-difference solutions of Maxwell's equations. Both the external currents and charges and the internal electromagnetic fields and cable responses are computed. Results of the analysis indicate that, in general, the short-circuit current on internal cables is larger for lightning, whereas the open-circuit voltages are slightly higher for NEMP. The lightning response is highly dependent upon the rise time of the injected current, as was expected. The analysis shows that coupling levels to cables in a helicopter are 20 to 30 dB larger than those observed in fixed-wing aircraft.
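
    The time-domain finite-difference (FDTD) method mentioned above discretizes Maxwell's equations on a staggered grid and leapfrogs the E and H field updates. Below is a minimal 1-D sketch of that scheme, not the authors' 3-D helicopter model; the grid sizes, Gaussian source, and all parameters are illustrative.

    ```python
    # Minimal 1-D FDTD (Yee) sketch: a Gaussian pulse (a crude stand-in for a
    # lightning/NEMP excitation) propagates on a 1-D grid between two
    # perfectly conducting walls (ez[0] and ez[-1] stay zero).
    import numpy as np

    c = 3e8                       # speed of light, m/s
    mu0 = 4e-7 * np.pi            # vacuum permeability, H/m
    eps0 = 8.854e-12              # vacuum permittivity, F/m
    dz = 0.01                     # spatial step, m
    dt = dz / (2 * c)             # time step; Courant number 0.5, so stable
    nz, nt = 400, 800

    ez = np.zeros(nz)             # electric field at integer grid points
    hy = np.zeros(nz - 1)         # magnetic field, staggered half a cell

    for t in range(nt):
        # Leapfrog: update H from the spatial difference of E, then E from H.
        hy += (dt / (mu0 * dz)) * (ez[1:] - ez[:-1])
        ez[1:-1] += (dt / (eps0 * dz)) * (hy[1:] - hy[:-1])
        # Soft Gaussian source injected at the grid centre.
        ez[nz // 2] += np.exp(-((t - 60) / 20.0) ** 2)

    print(f"peak |Ez| on the grid after {nt} steps: {np.abs(ez).max():.3e}")
    ```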

  1. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has led to new types of analytics that process Internet of Things data with low-cost engines to speed up data processing using parallel computing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application infrastructure processing engine is described from an architectural and a performance perspective, showing evidence of how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.
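
    The parallel pattern described, running the same analytic over many gazette documents at once, can be illustrated with a minimal sketch. This uses plain Python multiprocessing rather than the big data engines the article evaluates, and the documents and keyword analytic are invented for illustration.

    ```python
    # Generic "one analytic, many documents in parallel" sketch (not BIG-BOE).
    from collections import Counter
    from multiprocessing import Pool

    def analyze(doc: str) -> Counter:
        """One simple analytic: count occurrences of selected terms."""
        terms = ("contrato", "ley", "resolución")   # invented keyword set
        words = doc.lower().split()
        return Counter({t: words.count(t) for t in terms})

    if __name__ == "__main__":
        docs = [
            "Ley 3/2018 ... contrato de servicios ... resolución adoptada ...",
            "Resolución ... contrato menor ... contrato de obras ...",
        ]  # in practice: thousands of BOE documents
        with Pool() as pool:
            partials = pool.map(analyze, docs)   # documents analyzed in parallel
        totals = sum(partials, Counter())        # merge the per-document counts
        print(totals.most_common())
    ```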

  2. Big Data's Role in Precision Public Health.

    PubMed

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  3. Antigravity and the big crunch/big bang transition

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  4. Restoring Wyoming big sagebrush

    Treesearch

    Cindy R. Lysne

    2005-01-01

    The widespread occurrence of big sagebrush can be attributed to many adaptive features. Big sagebrush plays an essential role in its communities by providing wildlife habitat, modifying local environmental conditions, and facilitating the reestablishment of native herbs. Currently, however, many sagebrush steppe communities are highly fragmented. As a result, restoring...

  5. Exploiting big data for critical care research.

    PubMed

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years, the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets, and finally consider the scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining the confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine and by bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  6. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    PubMed

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to a Big domain, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9(th) (Lig A9) and 10(th) (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.

  7. Metal atom dynamics in superbulky metallocenes: a comparison of (Cp(BIG))2Sn and (Cp(BIG))2Eu.

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Schwerdtfeger, Peter; Nowik, Israel; Herber, Rolfe H

    2014-02-17

    Cp(BIG)2Sn (Cp(BIG) = (4-n-Bu-C6H4)5cyclopentadienyl), prepared by reaction of 2 equiv of Cp(BIG)Na with SnCl2, crystallized isomorphous to other known metallocenes with this ligand (Ca, Sr, Ba, Sm, Eu, Yb). Similarly, it shows perfect linearity, C-H···C(π) bonding between the Cp(BIG) rings and out-of-plane bending of the aryl substituents toward the metal. Whereas all other Cp(BIG)2M complexes show large disorder in the metal position, the Sn atom in Cp(BIG)2Sn is perfectly ordered. In contrast, (119)Sn and (151)Eu Mößbauer investigations on the corresponding Cp(BIG)2M metallocenes show that Sn(II) is more dynamic and loosely bound than Eu(II). The large displacement factors in the group 2 and especially in the lanthanide(II) metallocenes Cp(BIG)2M can be explained by static metal disorder in a plane parallel to the Cp(BIG) rings. Despite parallel Cp(BIG) rings, these metallocenes have a nonlinear Cpcenter-M-Cpcenter geometry. This is explained by an ionic model in which metal atoms are polarized by the negatively charged Cp rings. The extent of nonlinearity is in line with trends found in M(2+) ion polarizabilities. The range of known calculated dipole polarizabilities at the Douglas-Kroll CCSD(T) level was extended with values (atomic units) for Sn(2+) 15.35, Sm(2+)(4f(6) (7)F) 9.82, Eu(2+)(4f(7) (8)S) 8.99, and Yb(2+)(4f(14) (1)S) 6.55. This polarizability model cannot be applied to predominantly covalently bound Cp(BIG)2Sn, which shows a perfectly ordered structure. The bent geometry of Cp*2Sn should therefore not be explained by metal polarizability but is due to van der Waals Cp*···Cp* attraction and (to some extent) to a small p-character component in the Sn lone pair.

  8. Big Joe Capsule Assembly Activities

    NASA Image and Video Library

    1959-08-01

    Big Joe Capsule Assembly Activities in 1959 at NASA Glenn Research Center (formerly NASA Lewis). Big Joe was an Atlas missile that successfully launched a boilerplate model of the Mercury capsule on September 9, 1959.

  9. Urgent Call for Nursing Big Data.

    PubMed

    Delaney, Connie W

    2016-01-01

    The purpose of this panel is to expand internationally a National Action Plan for sharable and comparable nursing data for quality improvement and big data science. There is an urgent need to ensure that nursing has sharable and comparable data for quality improvement and big data science. A national collaborative, Nursing Knowledge and Big Data Science, includes multi-stakeholder groups focused on a National Action Plan toward implementing and using sharable and comparable nursing big data. Panelists will share accomplishments and future plans with an eye toward international collaboration. This presentation is suitable for any audience attending the NI2016 conference.

  10. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor
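
    The directed down-sampling idea, accumulating single cells into "index cell" transcriptomes, can be caricatured in a few lines. The sketch below is a toy illustration of pooling transcriptionally similar cells into aggregate profiles, not bigSCale's actual algorithm; it assumes numpy and scikit-learn, with invented Poisson counts standing in for real data.

    ```python
    # Toy "index cell" sketch: cluster cells in a reduced space, then pool each
    # cluster's raw counts into one aggregate profile, so downstream analysis
    # runs on far fewer units while retaining per-cluster transcript totals.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    counts = rng.poisson(1.0, size=(10_000, 500)).astype(float)  # cells x genes

    # Reduce, then group similar cells; each group becomes one index cell.
    pcs = PCA(n_components=20).fit_transform(np.log1p(counts))
    labels = KMeans(n_clusters=200, n_init=10, random_state=0).fit_predict(pcs)

    index_cells = np.vstack([counts[labels == k].sum(axis=0) for k in range(200)])
    print(counts.shape, "->", index_cells.shape)  # (10000, 500) -> (200, 500)
    ```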

  11. [Big data in medicine and healthcare].

    PubMed

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today are often too large and heterogeneous and change too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves, e.g. in social networks, and digitalization is ever increasing. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  12. High School Students as Mentors: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study

    ERIC Educational Resources Information Center

    Herrera, Carla; Kauh, Tina J.; Cooney, Siobhan M.; Grossman, Jean Baldwin; McMaken, Jennifer

    2008-01-01

    High schools have recently become a popular source of mentors for school-based mentoring (SBM) programs. The high school Bigs program of Big Brothers Big Sisters of America, for example, currently involves close to 50,000 high-school-aged mentors across the country. While the use of these young mentors has several potential advantages, their age…

  13. Making big sense from big data in toxicology by read-across.

    PubMed

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in the safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) the compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches, and some other high-content technologies, leave us with big data; the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensuring that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, among other approaches, a new web-based tool under development, called REACH-across, which aims to support and automate structure-based read-across, is presented.
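
    To make the read-across idea concrete, the toy sketch below predicts a property for an untested substance as a similarity-weighted average over its most similar neighbours. The fingerprints, similarity threshold, and property values are all invented; real pipelines would derive fingerprints from chemical structures.

    ```python
    # Toy read-across sketch: local, similarity-based intrapolation of a
    # property from data-rich neighbours to a data-poor target substance.

    def tanimoto(a: set, b: set) -> float:
        """Tanimoto similarity of two binary fingerprints (sets of on-bits)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    # Known substances: (fingerprint on-bits, measured property value).
    knowns = [
        ({1, 2, 3, 5, 8}, 4.2),
        ({1, 2, 3, 9}, 3.9),
        ({7, 11, 13}, 0.8),
    ]
    target = {1, 2, 3, 5}   # untested substance

    # Rank neighbours by similarity; keep only the local neighbourhood.
    scored = sorted(((tanimoto(target, fp), y) for fp, y in knowns), reverse=True)
    top = [(s, y) for s, y in scored if s > 0.3][:2]  # assumes >= 1 close neighbour

    # Similarity-weighted average of the neighbours' measured values.
    prediction = sum(s * y for s, y in top) / sum(s for s, _ in top)
    print(f"read-across prediction: {prediction:.2f}")
    ```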

  14. [Big data in official statistics].

    PubMed

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  15. Considerations on Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    LIU, Zhen; GUO, Huadong; WANG, Changlin

    2016-11-01

    Geospatial data, as a significant portion of big data, has recently gained the full attention of researchers. However, few researchers focus on the evolution of geospatial data and its scientific research methodologies. When entering into the big data era, fully understanding the changing research paradigm associated with geospatial data will definitely benefit future research on big data. In this paper, we look deep into these issues by examining the components and features of geospatial big data, reviewing relevant scientific research methodologies, and examining the evolving pattern of geospatial data in the scope of the four ‘science paradigms’. This paper proposes that geospatial big data has significantly shifted the scientific research methodology from ‘hypothesis to data’ to ‘data to questions’ and it is important to explore the generality of growing geospatial data ‘from bottom to top’. Particularly, four research areas that mostly reflect data-driven geospatial research are proposed: spatial correlation, spatial analytics, spatial visualization, and scientific knowledge discovery. It is also pointed out that privacy and quality issues of geospatial data may require more attention in the future. Also, some challenges and thoughts are raised for future discussion.

  16. Standard big bang nucleosynthesis and primordial CNO abundances after Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coc, Alain; Uzan, Jean-Philippe; Vangioni, Elisabeth, E-mail: coc@csnsm.in2p3.fr, E-mail: uzan@iap.fr, E-mail: vangioni@iap.fr

    Primordial or big bang nucleosynthesis (BBN) is one of the three strong historical pieces of evidence for the big bang model. The recent results from the Planck satellite mission have slightly changed the estimate of the baryonic density compared to the previous WMAP analysis. This article updates the BBN predictions for the light elements using the cosmological parameters determined by Planck, as well as an improvement of the nuclear network and new spectroscopic observations. There is a slight lowering of the primordial Li/H abundance; however, this lithium value still remains typically 3 times larger than its observed spectroscopic abundance in halo stars of the Galaxy. Given the importance of this "lithium problem", we trace the small changes in its calculated BBN abundance following updates of the baryonic density, neutron lifetime and networks. In addition, for the first time, we provide confidence limits for the production of ⁶Li, ⁹Be, ¹¹B and CNO, resulting from our extensive Monte Carlo calculation with our extended network. A specific focus is placed on primordial CNO production. Considering uncertainties on the nuclear rates around the CNO formation, we obtain CNO/H ≈ (5-30)×10⁻¹⁵. We further improve this estimate by analyzing correlations between yields and reaction rates and identify new influential reaction rates. These uncertain rates, if simultaneously varied, could lead to a significant increase of CNO production: CNO/H ∼ 10⁻¹³. This result is important for the study of population III star formation during the dark ages.
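
    The confidence limits quoted above come from Monte Carlo propagation of reaction-rate uncertainties through the nuclear network. The toy sketch below illustrates only that propagation pattern, not the authors' BBN code: the lognormal rate multipliers, the yield model, and all numbers are invented.

    ```python
    # Toy Monte Carlo propagation of reaction-rate uncertainties to a yield.
    # Each rate gets a lognormal multiplier (median 1, factor-of-N spread);
    # a made-up sensitivity model maps sampled rates to a CNO/H abundance.
    import numpy as np

    rng = np.random.default_rng(7)
    n_samples = 100_000

    f1 = rng.lognormal(mean=0.0, sigma=np.log(3.0), size=n_samples)   # factor ~3
    f2 = rng.lognormal(mean=0.0, sigma=np.log(10.0), size=n_samples)  # factor ~10

    # Invented sensitivities: yield scales with one rate, inversely with another.
    cno_h = 1e-14 * f1 / np.sqrt(f2)

    lo, med, hi = np.percentile(cno_h, [16, 50, 84])  # 68% confidence interval
    print(f"CNO/H = {med:.2e} (+{hi - med:.2e} / -{med - lo:.2e})")
    ```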

  17. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Treesearch

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  18. Big Data in Medicine is Driving Big Changes

    PubMed Central

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of "Big Data" in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  19. Health Informatics Scientists' Perception About Big Data Technology.

    PubMed

    Minou, John; Routsis, Fotios; Gallos, Parisis; Mantas, John

    2017-01-01

    The aim of this paper is to present the perceptions of Health Informatics Scientists about Big Data Technology in Healthcare. An empirical study was conducted among 46 scientists to assess their knowledge about Big Data Technology and their perceptions about using this technology in healthcare. Based on the study findings, 86.7% of the scientists had knowledge of Big Data Technology. Furthermore, 59.1% of the scientists believed that Big Data Technology refers to structured data. Additionally, 100% of the population believed that Big Data Technology can be implemented in Healthcare. Finally, the majority did not know any cases of the use of Big Data Technology in Greece, while 57.8% of them mentioned that they knew of use cases of Big Data Technology abroad.

  20. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    PubMed

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data, in the form of clinical EMRs and other novel data sources, can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  1. A SWOT Analysis of Big Data

    ERIC Educational Resources Information Center

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  2. A survey of big data research

    PubMed Central

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  3. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

    Report-form and slide residue; recoverable content: presentation "Software Architecture: Trends and New Directions" (#SEIswArch, © 2014 Carnegie Mellon University), report title "Software Architecture for Big Data Systems", and the opening question "What is Big Data? From a software..."

  4. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  5. Big Domains Are Novel Ca2+-Binding Modules: Evidences from Big Domains of Leptospira Immunoglobulin-Like (Lig) Proteins

    PubMed Central

    Palaniappan, Raghavan U. M.; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P.; Sharma, Yogendra; Chang, Yung-Fu

    2010-01-01

    Background: Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca2+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca2+ binds to a Big domain, which would provide a novel functional role for proteins containing the Big fold. Principal Findings: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca2+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. Conclusions: We demonstrate that the Lig proteins are Ca2+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca2+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca2+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca2+ binding. PMID:21206924

  6. Big sagebrush seed bank densities following wildfires

    USDA-ARS?s Scientific Manuscript database

    Big sagebrush (Artemisia spp.) is a critical shrub to many wildlife species including sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush is killed by wildfires, and big sagebrush seed is generally short-lived and does not s...

  7. Proposal for the determination of nuclear masses by high-precision spectroscopy of Rydberg states

    NASA Astrophysics Data System (ADS)

    Wundt, B. J.; Jentschura, U. D.

    2010-06-01

    The theoretical treatment of Rydberg states in one-electron ions is facilitated by the virtual absence of the nuclear-size correction, and fundamental constants like the Rydberg constant may be in the reach of planned high-precision spectroscopic experiments. The dominant nuclear effect that shifts transition energies among Rydberg states therefore is due to the nuclear mass. As a consequence, spectroscopic measurements of Rydberg transitions can be used in order to precisely deduce nuclear masses. A possible application of this approach to hydrogen and deuterium, and hydrogen-like lithium and carbon is explored in detail. In order to complete the analysis, numerical and analytic calculations of the quantum electrodynamic self-energy remainder function for states with principal quantum number n = 5, ..., 8 and with angular momentum ℓ = n - 1 and ℓ = n - 2 (i.e., j = ℓ ± 1/2) are described.
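
    The mass-scaling argument here is that Bohr-like level energies carry a reduced-mass factor μ/mₑ = M/(M + mₑ), so a measured transition energy pins down the nuclear mass M. Below is a minimal numerical sketch of the size of the isotope shift such measurements exploit (hydrogen and deuterium only, Z = 1, no QED or relativistic corrections; constants are CODATA values); it is an illustration, not the authors' analysis.

    ```python
    # Reduced-mass scaling of Rydberg transition energies in one-electron
    # systems: E_n ~ -R_inf*h*c * (mu/m_e) / n^2, mu = m_e*M / (m_e + M).
    R_INF_EV = 13.605693122994   # Rydberg energy R_inf*h*c in eV
    M_E_U = 5.48579909065e-4     # electron mass in atomic mass units (u)

    def transition_energy_ev(n_lower: int, n_upper: int, nuclear_mass_u: float) -> float:
        """Bohr-model energy of an n_upper -> n_lower transition (Z = 1)."""
        mu_over_me = nuclear_mass_u / (nuclear_mass_u + M_E_U)
        return R_INF_EV * mu_over_me * (1.0 / n_lower**2 - 1.0 / n_upper**2)

    # H/D isotope shift of the n = 8 -> n = 5 transition: purely a mass effect,
    # so inverting the measured shift yields the nuclear mass ratio.
    e_h = transition_energy_ev(5, 8, 1.00727646688)   # proton mass in u
    e_d = transition_energy_ev(5, 8, 2.01355321270)   # deuteron mass in u
    print(f"H : {e_h:.9f} eV")
    print(f"D : {e_d:.9f} eV")
    print(f"shift: {(e_d - e_h) * 1e6:.3f} micro-eV")
    ```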

  8. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable or combination of variables suitable for the prediction of disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health, (b) training epidemiologists, (c) investigating the impact on clinical practices and the doctor-patient relation of the influx of Big Data and computerized medicine and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  9. Big Data and Analytics in Healthcare.

    PubMed

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  10. "Big data" in economic history.

    PubMed

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  11. Big Data and Ambulatory Care

    PubMed Central

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2015-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an overview of the legal framework governing health information, dispels misconceptions about privacy regulations, and highlights how ambulatory care providers in particular can maximize the utility of big data to improve care. PMID:25401945

  12. Big Data Knowledge in Global Health Education.

    PubMed

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  13. Nuclear envelope rupture: little holes, big openings.

    PubMed

    Hatch, Emily M

    2018-06-01

    The nuclear envelope (NE), which is a critical barrier between the DNA and the cytosol, is capable of extensive dynamic membrane remodeling events in interphase. One of these events, interphase NE rupture and repair, can occur in both normal and disease states and results in the loss of nucleus compartmentalization. NE rupture is not lethal, but new research indicates that it could have broad impacts on genome stability and activate innate immune responses. These observations suggest a new model for how changes in NE structure could be pathogenic in cancer, laminopathies, and autoinflammatory syndromes, and redefine the functions of nucleus compartmentalization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Big data for bipolar disorder.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  15. GEOSS: Addressing Big Data Challenges

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  16. Big Questions: Missing Antimatter

    ScienceCinema

    Lincoln, Don

    2018-06-08

    Einstein's equation E = mc² is often said to mean that energy can be converted into matter. More accurately, energy can be converted into matter and antimatter. During the first moments of the Big Bang, the universe was smaller and hotter, and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However, when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although he doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  17. Big data in biomedicine.

    PubMed

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Big Data’s Role in Precision Public Health

    PubMed Central

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  19. Big data in forensic science and medicine.

    PubMed

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have got their own tribune on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning techniques, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from being new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated, if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. Deep Mixing of 3He: Reconciling Big Bang and Stellar Nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggleton, P P; Dearborn, D P; Lattanzio, J

    2006-07-26

    Low-mass stars, approximately 1-2 solar masses, near the Main Sequence are efficient at producing ³He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of ³He with the predictions of both stellar and Big Bang nucleosynthesis. In this paper we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus we are able to remove the threat that ³He production in low-mass stars poses to the Big Bang nucleosynthesis of ³He.

  1. Big Data and Perioperative Nursing.

    PubMed

    Westra, Bonnie L; Peterson, Jessica J

    2016-10-01

    Big data are large volumes of digital data that can be collected from disparate sources and are challenging to analyze. These data are often described with the five "Vs": volume, velocity, variety, veracity, and value. Perioperative nurses contribute to big data through documentation in the electronic health record during routine surgical care, and these data have implications for clinical decision making, administrative decisions, quality improvement, and big data science. This article explores methods to improve the quality of perioperative nursing data and provides examples of how these data can be combined with broader nursing data for quality improvement. We also discuss a national action plan for nursing knowledge and big data science and how perioperative nurses can engage in collaborative actions to transform health care. Standardized perioperative nursing data has the potential to affect care far beyond the original patient. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  2. Modeling in Big Data Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Szymczak, Samantha; Gunning, Dave

    Human-Centered Big Data Research (HCBDR) is an area of work focused on the methodologies and research questions involved in understanding how humans interact with "big data". In the context of this paper, we refer to "big data" in a holistic sense, including most (if not all) of the dimensions defining the term, such as complexity, variety, velocity, veracity, etc. Simply put, big data requires us as researchers to question and reconsider existing approaches, with the opportunity to illuminate new kinds of insights that were traditionally out of reach to humans. The purpose of this article is to summarize the discussions and ideas about the role of models in HCBDR at a recent workshop. Models, within the context of this paper, include both computational models and conceptual mental models. As such, the discussions summarized in this article seek to understand the connection between these two categories of models.

  3. NASA's Big Data Task Force

    NASA Astrophysics Data System (ADS)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on topics such as the existing and planned evolution of NASA's science data cyber-infrastructure, which supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers that follow in this ESSI session.

  4. Big Data Technologies

    PubMed Central

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least three important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow a novel view of patients' care processes and of individual patients' behaviors to be depicted, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the gathered data and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  5. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    PubMed

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV-based Berlin Inventory of Gambling Behavior Screening tool (BIG-S) in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
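
    As an illustration of how such screening accuracy figures are computed, a minimal Python sketch follows; the confusion-matrix counts are hypothetical, chosen only to reproduce the reported 99.7% sensitivity and 96.2% specificity from the stated group sizes (n=300 and n=132), and are not the study's actual cell counts.

      def sensitivity_specificity(tp, fn, tn, fp):
          """Return (sensitivity, specificity) from confusion-matrix counts."""
          sensitivity = tp / (tp + fn)  # true-positive rate among diagnosed patients
          specificity = tn / (tn + fp)  # true-negative rate among comparison subjects
          return sensitivity, specificity

      # Hypothetical counts: 300 gambling-disorder patients, 132 comparison subjects.
      sens, spec = sensitivity_specificity(tp=299, fn=1, tn=127, fp=5)
      print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 99.7%, 96.2%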

  6. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin

    The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technical characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  7. Quantum nature of the big bang.

    PubMed

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.
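
    The "big bounce" can be summarized by the effective Friedmann equation of loop quantum cosmology; the expression below is the standard form from this research program (quoted from the general LQC literature, not extracted from this abstract):

      \[
      H^{2} = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_{c}}\right),
      \qquad \rho_{c} \sim 0.41\,\rho_{\mathrm{Planck}} .
      \]

    As the backward-evolved density approaches the critical density, the Hubble rate H passes through zero and changes sign, so contraction turns into expansion and the classical singularity is replaced by a bounce.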

  8. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    ERIC Educational Resources Information Center

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  9. Big data processing in the cloud - Challenges and platforms

    NASA Astrophysics Data System (ADS)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge in both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, the most important and most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud as well as potential problems.
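
    As a sketch of the Lambda architecture named above, the toy Python below merges a recomputed batch view with an incrementally maintained real-time view at query time; the event data and layer implementations are illustrative stand-ins (a production system would use a distributed batch engine and a stream processor):

      from collections import Counter

      # Batch layer: periodically recompute a complete view over the master dataset.
      def batch_view(master_events):
          return Counter(e["key"] for e in master_events)

      # Speed layer: incrementally index events that arrived after the last batch run.
      class SpeedLayer:
          def __init__(self):
              self.view = Counter()
          def ingest(self, event):
              self.view[event["key"]] += 1

      # Serving layer: a query merges the batch view with the real-time view.
      def query(batch, speed, key):
          return batch[key] + speed.view[key]

      master = [{"key": "sensor-1"}, {"key": "sensor-2"}, {"key": "sensor-1"}]
      batch = batch_view(master)
      speed = SpeedLayer()
      speed.ingest({"key": "sensor-1"})       # event newer than the last batch run
      print(query(batch, speed, "sensor-1"))  # -> 3

    The Kappa architecture drops the batch layer entirely and reprocesses history through the same streaming path whenever views must be rebuilt.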

  10. Ethics and Epistemology in Big Data Research.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  11. Big Questions: Missing Antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    2013-08-27

    Einstein's equation E = mc2 is often said to mean that energy can be converted into matter. More accurately, energy can be converted to matter and antimatter. During the first moments of the Big Bang, the universe was smaller, hotter, and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However, when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although he doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  12. A Great Year for the Big Blue Water

    NASA Astrophysics Data System (ADS)

    Leinen, M.

    2016-12-01

    It has been a great year for the big blue water. Last year the 'United_Nations' decided that it would focus on long time remain alright for the big blue water as one of its 'Millenium_Development_Goals'. This is new. In the past the big blue water was never even considered as a part of this world long time remain alright push. Also, last year the big blue water was added to the words of the group of world people paper #21 on cooling the air and things. It is hard to believe that the big blue water was not in the paper before because 70% of the world is covered by the big blue water! Many people at the group of world meeting were from our friends at 'AGU'.

  13. Real-Time Information Extraction from Big Data

    DTIC Science & Technology

    2015-10-01

    INSTITUTE FOR DEFENSE ANALYSES. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract fragment: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big...

  14. Big data and biomedical informatics: a challenging opportunity.

    PubMed

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler for carrying out unprecedented research studies and for implementing new models of healthcare delivery. It is therefore first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications for the reproducibility of research studies and the management of privacy and data access, and proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  15. Think Big, Bigger ... and Smaller

    ERIC Educational Resources Information Center

    Nisbett, Richard E.

    2010-01-01

    One important principle of social psychology, writes Nisbett, is that some big-seeming interventions have little or no effect. This article discusses a number of cases from the field of education that confirm this principle. For example, Head Start seems like a big intervention, but research has indicated that its effects on academic achievement…

  16. Personality and job performance: the Big Five revisited.

    PubMed

    Hurtz, G M; Donovan, J J

    2000-12-01

    Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.
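
    For readers unfamiliar with the method, a bare-bones sketch of the meta-analytic step in Python; the primary-study data are invented, and the criterion reliability of .52 is an assumption (a conventional value for supervisory performance ratings in this literature), not a figure from the paper:

      import math

      # Hypothetical primary studies: (sample size N, observed validity r).
      studies = [(120, 0.18), (340, 0.22), (95, 0.10)]

      total_n = sum(n for n, _ in studies)
      r_bar = sum(n * r for n, r in studies) / total_n  # N-weighted mean validity

      r_yy = 0.52  # assumed reliability of the job performance criterion
      rho = r_bar / math.sqrt(r_yy)  # correct mean validity for criterion unreliability

      print(f"mean observed r = {r_bar:.3f}, corrected validity = {rho:.3f}")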

  17. Adding Big Data Analytics to GCSS-MC

    DTIC Science & Technology

    2014-09-30

    Report front matter only: keywords Big Data, Hadoop, MapReduce, GCSS-MC; 93 pages; unclassified. Table-of-contents fragments: 2.5 Hadoop; 3 The Experiment Design; 3.1 Why Add a Big Data Element; 3.2 Adding a Big Data Element to GCSS-MC; 3.3 Building a Hadoop Cluster...

  18. Ethics and Epistemology of Big Data.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian

    2017-12-01

    In this Symposium on the Ethics and Epistemology of Big Data, we present four perspectives on the ways in which the rapid growth in size of research databanks-i.e. their shift into the realm of "big data"-has changed their moral, socio-political, and epistemic status. While there is clearly something different about "big data" databanks, we encourage readers to place the arguments presented in this Symposium in the context of longstanding debates about the ethics, politics, and epistemology of biobank, database, genetic, and epidemiological research.

  19. The challenges of big data.

    PubMed

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics has set off a feeding frenzy, fueled by the production of many next-generation-sequencing-based data sets that seek to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  20. Big³. Editorial.

    PubMed

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics, with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths, weaknesses, and risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions to the topic. For the first time in its history, the IMIA Yearbook will be published in an open access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  1. The Big Read: Case Studies

    ERIC Educational Resources Information Center

    National Endowment for the Arts, 2009

    2009-01-01

    The Big Read evaluation included a series of 35 case studies designed to gather more in-depth information on the program's implementation and impact. The case studies gave readers a valuable first-hand look at The Big Read in context. Both formal and informal interviews, focus groups, attendance at a wide range of events--all showed how…

  2. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    USGS Publications Warehouse

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  3. Application and Prospect of Big Data in Water Resources

    NASA Astrophysics Data System (ADS)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and commonly applied in many fields. However, academic studies have only recently turned to Big Data applications in water resources. As a result, water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data Driven Decision) will be utilized more in water resources management in the future.
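
    To make the MapReduce model named above concrete, here is a toy in-memory sketch in Python; the hydrology records and field names are hypothetical, and a real deployment would run the same map/shuffle/reduce stages on a Hadoop cluster:

      from collections import defaultdict

      # Toy job: mean gauge level per river from (river, level) records.
      records = [("river-a", 3.2), ("river-b", 5.1), ("river-a", 2.8)]

      def mapper(record):
          river, level = record
          yield river, level  # emit (key, value) pairs

      def reducer(river, levels):
          return river, sum(levels) / len(levels)  # aggregate all values per key

      # Shuffle phase: group mapper output by key before reducing.
      groups = defaultdict(list)
      for rec in records:
          for key, value in mapper(rec):
              groups[key].append(value)

      print([reducer(k, v) for k, v in groups.items()])
      # -> [('river-a', 3.0), ('river-b', 5.1)]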

  4. Toward a Literature-Driven Definition of Big Data in Healthcare.

    PubMed

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
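
    A minimal check of the Log(n*p) >= 7 criterion above in Python, assuming (as the threshold of 7 suggests) a base-10 logarithm:

      import math

      def is_big_data(n, p):
          """Literature-driven criterion: log10(n * p) >= 7."""
          return math.log10(n * p) >= 7

      print(is_big_data(n=1_000_000, p=50))  # True:  log10(5e7) ~ 7.7
      print(is_big_data(n=10_000, p=100))    # False: log10(1e6) = 6.0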

  5. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    PubMed

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health care system to improve quality of care, including predictive modeling of risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice as an important component of a learning health care system.

  6. Astrophysical S-factor of the 3He(α,γ)7Be reaction in Big-Bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Ghamary, Motahareh; Sadeghi, Hossein; Mohammadi, Saeed

    2018-05-01

    In the present work, we have studied the properties of the 3He(α,γ)7Be reaction. Direct radiative capture reactions in Big-Bang nucleosynthesis mainly take place in the external region of the inter-nuclear interaction range and play an essential role in nuclear astrophysics. Among these reactions, the 3He(α,γ)7Be reaction, with Q = 1.586 MeV, is a key link in the Big-Bang nucleosynthesis chain. This reaction can be used to understand the physical and chemical properties of the Sun, and it can help explain the deficit of solar neutrinos observed in detectors on Earth: the neutrino fluxes predicted from the decay of 7Be and 8B produced in the center of the Sun are almost proportional to the astrophysical S-factor of the 3He(α,γ)7Be reaction, S34. The 3He(α,γ)7Be reaction is therefore considered a key to solving the solar neutrino puzzle. Finally, we have obtained the astrophysical S-factor for capture to the ground (3/2-) and first excited (1/2-) states, and the total S34, using modern two-body local nucleon-nucleon potential models. We have also compared the obtained S-factor with experimental data and other theoretical works.
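
    For reference, the astrophysical S-factor used above is the standard factorization of a charged-particle capture cross section (a textbook definition, not a result of this paper):

      \[
      \sigma(E) = \frac{S(E)}{E}\,e^{-2\pi\eta(E)},
      \qquad \eta(E) = \frac{Z_{1} Z_{2} e^{2}}{\hbar v},
      \]

    where eta is the Sommerfeld parameter and v the relative velocity. Dividing out the 1/E dependence and the Coulomb-barrier exponential leaves the slowly varying S(E), which can be extrapolated down to stellar energies.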

  7. Big Data and Biomedical Informatics: A Challenging Opportunity

    PubMed Central

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler for carrying out unprecedented research studies and for implementing new models of healthcare delivery. It is therefore first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications for the reproducibility of research studies and the management of privacy and data access, and proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  8. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

    There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce the needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL - Scalable Parallel Interoperable Data Analytics Library - built on the system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas, including polar science.

  9. Issues in Big-Data Database Systems

    DTIC Science & Technology

    2014-06-01

    Bibliography fragment recovered from the report: Berman, Jules K. (2013). Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. New York: Elsevier. 261pp.

  10. Development and Operation of an Automatic Rotor Trim Control System for the UH-60 Individual Blade Control Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.; Tischler, Mark B.

    2010-01-01

    An automatic rotor trim control system was developed and successfully used during a wind tunnel test of a full-scale UH-60 rotor system with Individual Blade Control (IBC) actuators. The trim control system allowed rotor trim to be set more quickly, precisely and repeatably than in previous wind tunnel tests. This control system also allowed the rotor trim state to be maintained during transients and drift in wind tunnel flow, and through changes in IBC actuation. The ability to maintain a consistent rotor trim state was key to quickly and accurately evaluating the effect of IBC on rotor performance, vibration, noise and loads. This paper presents details of the design and implementation of the trim control system including the rotor system hardware, trim control requirements, and trim control hardware and software implementation. Results are presented showing the effect of IBC on rotor trim and dynamic response, a validation of the rotor dynamic simulation used to calculate the initial control gains and tuning of the control system, and the overall performance of the trim control system during the wind tunnel test.
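
    The paper's actual control laws are not reproduced in this abstract, so the Python below is only a generic illustration of automatic trim: an integral feedback loop driving measured trim quantities to their setpoints through small control inputs. The response matrix, gain, and units are all made up:

      import numpy as np

      # Toy linear rotor model: trim outputs y respond to control inputs u as y = T @ u.
      T = np.array([[0.8, 0.1],
                    [0.2, 0.9]])       # hypothetical control-response matrix
      target = np.array([1.0, 0.0])    # desired trim state (arbitrary units)

      u = np.zeros(2)                  # control inputs (e.g., collective, cyclic)
      K = 0.5                          # integral gain; <1 trades speed for robustness
      for _ in range(50):
          y = T @ u                            # "measured" trim response
          error = target - y
          u += K * np.linalg.solve(T, error)   # model-based integral update
      print(np.round(T @ u, 4))                # -> [1. 0.], i.e., on target

    A model-based update like this mirrors the paper's use of a rotor dynamic simulation to compute initial gains; any error surviving model mismatch is removed by the integral action over repeated steps.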

  11. WE-H-BRB-00: Big Data in Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio of panel presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13-14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research. To optimize our current data collection by adopting new strategies from outside radiation oncology. To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  12. Evaluating the potential of the MegaSIMS for nuclear forensics

    NASA Astrophysics Data System (ADS)

    Boehnke, P.; McKeegan, K. D.; Coath, C. D.; Hutcheon, I. D.; Steele, R. C.; Harrison, M.

    2013-12-01

    Nuclear forensics investigates the illicit movement of nuclear materials. Measurements of uranium isotopic compositions are an important key as they permit provenance tracing and determination of intended use. Traditional secondary ion mass spectrometers (SIMS) are incapable of resolving 235UH from 236U due to the high mass resolving power (MRP ~38,000) needed, significantly limiting their ability to accurately measure 236U/235U, particularly for highly enriched uranium. This limitation can significantly inhibit the ability to establish details about enrichment processes. The MegaSIMS is a unique combination of SIMS and accelerator mass spectrometry (AMS) and allows for molecular-interference-free measurements, while retaining the spatial resolution and ease of sample preparation common in SIMS analyses. The instrument was primarily designed to measure the oxygen isotope composition of the solar wind [1] and its capability for measuring high-mass elements has not been evaluated previously. We evaluated the potential of the MegaSIMS by measuring 236U/235U without hydride interference. While preliminary results show an abundance sensitivity of ~10^-9 and an MRP of ~1,200 at the high-mass side of 238 amu, precision is limited by the detector geometry and slow magnet switching. Future work will include developing electrostatic peak switching as well as refining the measurement precision and abundance sensitivity of the MegaSIMS for nuclear forensics. [1] McKeegan, Kallio, Heber, Jarzebinski, Mao, Coath, Kunihiro, Wiens, Nordholt, Moses Jr., Reisenfeld, Jurewicz, and Burnett, 2011. Science. 332, 1528-1532.
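
    The quoted MRP of ~38,000 follows directly from the small mass difference between the 235U1H molecule and 236U; the arithmetic below uses standard atomic masses and is ours, not the abstract's:

      \[
      M(^{235}\mathrm{U}\,^{1}\mathrm{H}) \approx 235.04393 + 1.00783 = 236.05176\ \mathrm{u},
      \qquad M(^{236}\mathrm{U}) \approx 236.04557\ \mathrm{u},
      \]
      \[
      \mathrm{MRP} = \frac{M}{\Delta M}
      \approx \frac{236.05}{236.05176 - 236.04557}
      \approx 3.8 \times 10^{4} .
      \]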

  13. Epidemiology in the Era of Big Data

    PubMed Central

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  14. Toward a Literature-Driven Definition of Big Data in Healthcare

    PubMed Central

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n*p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  15. Big-Eyed Bugs Have Big Appetite for Pests

    USDA-ARS?s Scientific Manuscript database

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  16. Nuclear reactions from lattice QCD

    DOE PAGES

    Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.

    2015-01-13

    One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.

  17. Big Data - What is it and why it matters.

    PubMed

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  18. Research on information security in big data era

    NASA Astrophysics Data System (ADS)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the challenges and causes of data security problems brought by big data, discusses the development trend of network attacks against the background of big data, and puts forward our own views on the development of security defenses in terms of technology, strategy and products.

  19. ["Big data" - large data, a lot of knowledge?].

    PubMed

    Hothorn, Torsten

    2015-01-28

    For some years now, the term Big Data has described technologies for extracting knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.

  20. Big Ideas in Primary Mathematics: Issues and Directions

    ERIC Educational Resources Information Center

    Askew, Mike

    2013-01-01

    This article is located within the literature arguing for attention to Big Ideas in teaching and learning mathematics for understanding. The focus is on surveying the literature of Big Ideas and clarifying what might constitute Big Ideas in the primary Mathematics Curriculum based on both theoretical and pragmatic considerations. This is…

  1. Big Data - Smart Health Strategies

    PubMed Central

    2014-01-01

    Summary Objectives To select the best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we get closer to a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate reaching the potential that big data offers for personalized medicine and smart health strategies in the near future. PMID:25123721

  2. Big Data Management in US Hospitals: Benefits and Barriers.

    PubMed

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate hospitals' ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and the cost of developing the analytics. Many hospitals will need to invest in acquiring personnel experienced in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  3. Big Data in Caenorhabditis elegans: quo vadis?

    PubMed Central

    Hutter, Harald; Moerman, Donald

    2015-01-01

    A clear definition of what constitutes “Big Data” is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of “complete” data sets for this organism is actually rather small—not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein–protein interaction—important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. PMID:26543198

  4. [Relevance of big data for molecular diagnostics].

    PubMed

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This especially includes omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or the development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  5. Big data in psychology: A framework for research advancement.

    PubMed

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. 'Big data' in pharmaceutical science: challenges and opportunities.

    PubMed

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  7. Sports and the Big6: The Information Advantage.

    ERIC Educational Resources Information Center

    Eisenberg, Mike

    1997-01-01

    Explores the connection between sports and the Big6 information problem-solving process and how sports provides an ideal setting for learning and teaching about the Big6. Topics include information aspects of baseball, football, soccer, basketball, figure skating, track and field, and golf; and the Big6 process applied to sports. (LRW)

  8. Current applications of big data in obstetric anesthesiology.

    PubMed

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  9. [Big data and their perspectives in radiation therapy].

    PubMed

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and in data aggregation into large databases through improved computer technology. One of the current challenges in building big data in radiation therapy is turning routine care items, which today remain dark data (i.e., data not yet collected), into usable data, and fusing databases that collect different types of information (dose-volume histograms and toxicity data, for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care, or the quality of the data collected. The use of big data requires a collective effort by physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, in oncology. Big data involves a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  10. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  11. Volume and Value of Big Healthcare Data

    PubMed Central

    Dinov, Ivo D.

    2016-01-01

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309

  12. Simulation Experiments: Better Data, Not Just Big Data

    DTIC Science & Technology

    2014-12-01

    Modeling and Computer Simulation 22 (4): 20:1–20:17. Hogan, Joe. 2014, June 9. “So Far, Big Data is Small Potatoes”. Scientific American Blog Network. Available via http://blogs.scientificamerican.com/cross-check/2014/06/09/so-far-big-data-is-small-potatoes/. IBM. 2014. “Big Data at the Speed of Business”

  13. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  14. International Aerospace and Ground Conference on Lightning and Static Electricity. 1984 technical papers. Supplement

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The indirect effects of lightning on digital systems, ground system protection, and the corrosion properties of conductive materials are addressed. The responses of a UH-60A helicopter and tactical shelters to lightning and nuclear electromagnetic pulses are discussed.

  15. Big data: survey, technologies, opportunities, and challenges.

    PubMed

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  16. Big Data: Survey, Technologies, Opportunities, and Challenges

    PubMed Central

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  17. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    NASA Astrophysics Data System (ADS)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a big phrase these days. As data-generation capabilities become more demanding and scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward “big data-as-a-service” is now talked about everywhere. On the one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve the security and other real-time problems of migrating big data to cloud-based platforms. This article focuses on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, together with the possibility of doing big data analytics on a cloud platform, is in demand for a new era of growth. The article also gives information about the technologies and techniques available for migrating big data to the cloud.

  18. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    PubMed

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention has been focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as great as the challenge of understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  19. Big data analytics in healthcare: promise and potential.

    PubMed

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

  20. Big data are coming to psychiatry: a general introduction.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Bauer, Michael

    2015-12-01

    Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.

  1. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
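
    To give a feel for seeded extraction (without reproducing the paper's construction), the sketch below condenses a large, biased byte blob into a short near-uniform digest using a keyed BLAKE2b hash as a stand-in for a randomly chosen hash function. The source format, seed length, and output size are all illustrative assumptions; soundness requires the source to carry enough min-entropy and the seed to be independent of it, neither of which is checked here.

```python
# Toy seeded randomness-extraction sketch over a "big source" (illustrative).
import hashlib
import os

def extract_bits(big_source: bytes, seed: bytes, out_len: int = 32) -> bytes:
    # A keyed hash plays the role of a randomly chosen hash function;
    # min-entropy of the source and independence of the seed are assumed.
    h = hashlib.blake2b(key=seed, digest_size=out_len)
    h.update(big_source)
    return h.digest()

source = bytes(b & 0x0F for b in os.urandom(1 << 20))  # biased 1 MiB sample
seed = os.urandom(16)                                   # short independent seed
print(extract_bits(source, seed).hex())
```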

  2. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  3. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  4. Big Data, Big Problems: Incorporating Mission, Values, and Culture in Provider Affiliations.

    PubMed

    Shaha, Steven H; Sayeed, Zain; Anoushiravani, Afshin A; El-Othmani, Mouhanad M; Saleh, Khaled J

    2016-10-01

    This article explores how integration of data from clinical registries and electronic health records produces a quality impact within orthopedic practices. Data are differentiated from information, and several types of data that are collected and used in orthopedic outcome measurement are defined. Furthermore, the concept of comparative effectiveness and its impact on orthopedic clinical research are assessed. This article places emphasis on how the concept of big data produces health care challenges balanced with benefits that may be faced by patients and orthopedic surgeons. Finally, essential characteristics of an electronic health record that interlinks musculoskeletal care and big data initiatives are reviewed. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. AirMSPI PODEX BigSur Terrain Images

    Atmospheric Science Data Center

    2013-12-13

    Browse images from the PODEX 2013 campaign: Big Sur target (Big Sur, California), 02/03/2013, terrain-projected. For more information, see the Data Product Specifications (DPS).

  6. A New Look at Big History

    ERIC Educational Resources Information Center

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spacial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  7. West Virginia's big trees: setting the record straight

    Treesearch

    Melissa Thomas-Van Gundy; Robert Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor, historian Rob Whetsell, approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  8. Measurements of Tip Vortices from a Full-Scale UH-60A Rotor by Retro-Reflective Background-Oriented Schlieren and Stereo Photogrammetry

    NASA Technical Reports Server (NTRS)

    Schairer, Edward; Kushner, Laura K.; Heineck, James T.

    2013-01-01

    Positions of vortices shed by a full-scale UH-60A rotor in forward flight were measured during a test in the National Full-Scale Aerodynamics Complex at NASA Ames Research Center. Vortices in a region near the tip of the advancing blade were visualized from two directions by Retro-Reflective Background-Oriented Schlieren (RBOS). Correspondence of points on the vortex in the RBOS images from both cameras was established using epipolar geometry. The object-space coordinates of the vortices were then calculated from the image-plane coordinates using stereo photogrammetry. One vortex from the tip of the blade that had most recently passed was visible in most of the data. The visibility of the vortices was greatest at high thrust and low advance ratios. At these favorable conditions, vortices from the most recent passages of all four blades were detected. The vortex positions were in good agreement with PIV data for a case where PIV measurements were also made. RBOS and photogrammetry provided measurements of the angle at which each vortex passed through the PIV plane.
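
    The core geometric step, recovering object-space coordinates from matched image points in two calibrated views, can be sketched with standard linear (DLT) triangulation. The projection matrices and observed point below are synthetic stand-ins, not data from the test; this shows the generic technique rather than the facility's actual processing pipeline.

```python
# Generic two-view triangulation (DLT) from matched image-plane points.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Solve the homogeneous linear system for the 3D point seen at uv1, uv2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                                   # null vector of A
    return X[:3] / X[3]                          # dehomogenize

# Synthetic example: two cameras one meter apart observing (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xw = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
uv2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
print(triangulate(P1, P2, uv1, uv2))             # ~ [ 1.  2. 10.]
```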

  9. 77 FR 49779 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written comments about...

  10. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... held at the Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written...

  11. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    ScienceCinema

    None

    2017-12-09

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  12. Structuring the Curriculum around Big Ideas

    ERIC Educational Resources Information Center

    Alleman, Janet; Knighton, Barbara; Brophy, Jere

    2010-01-01

    This article provides an inside look at Barbara Knighton's classroom teaching. She uses big ideas to guide her planning and instruction and gives other teachers suggestions for adopting the big idea approach and ways for making the approach easier. This article also represents a "small slice" of a dozen years of collaborative research,…

  13. Toward a manifesto for the 'public understanding of big data'.

    PubMed

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  14. Big Data and SME financing in China

    NASA Astrophysics Data System (ADS)

    Tian, Z.; Hassan, A. F. S.; Razak, N. H. A.

    2018-05-01

    Big Data has become more and more prevalent in recent years, attracting attention from many quarters, including academia, industry, and even government. Big Data can be seen as a next-generation source of power for the economy. Today, Big Data represents a new way to approach information and can help all industry and business fields. The Chinese financial market has long been dominated by state-owned banks; however, these banks offer little effective help to small- and medium-sized enterprises (SMEs) and private businesses. The development of Big Data is changing the financial market, with more and more financial products and services provided by Internet companies in China. Credit-rating models and borrower identification make online financial services more efficient than conventional banks, and these services also challenge the domination of state-owned banks.

  15. An embedding for the big bang

    NASA Technical Reports Server (NTRS)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  16. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimilies may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  17. Unraveling the effects of sex and dispersal: Ozark big-eared bat (Corynorhinus townsendii ingens) conservation genetics

    USGS Publications Warehouse

    Weyandt, S.E.; Van Den Bussche, Ronald A.; Hamilton, M.J.; Leslie, David M.

    2005-01-01

    The Ozark big-eared bat (Corynorhinus townsendii ingens) is federally listed as endangered and is found in only a small number of caves in eastern Oklahoma and northwestern Arkansas. Previous studies suggested site fidelity of females to maternity caves; however, males are solitary most of the year, and thus specific information on their behavior and roosting patterns is lacking. Population genetic variation often provides the necessary data to make inferences about gene flow or mating behavior within that population. We used 2 types of molecular data: DNA sequences from the mitochondrial D loop and alleles at 5 microsatellite loci. Approximately 5% of the population, 24 males and 39 females (63 individuals), were sampled. No significant differentiation between 5 sites was present in nuclear microsatellite variation, but distribution of variation in maternally inherited markers differed among sites. This suggests limited dispersal of female Ozark big-eared bats and natal philopatry. Areas that experience local extinctions are unlikely to be recolonized by species that show strong site fidelity. These results provide a greater understanding of the population dynamics of Ozark big-eared bats and highlight the importance of cave protection relative to maintaining genetic integrity during recovery activities for this listed species. © 2005 American Society of Mammalogists.

  18. Commentary: Epidemiology in the era of big data.

    PubMed

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  19. Big Data and the Future of Radiology Informatics.

    PubMed

    Kansagra, Akash P; Yu, John-Paul J; Chatterjee, Arindam R; Lenchik, Leon; Chow, Daniel S; Prater, Adam B; Yeh, Jean; Doshi, Ankur M; Hawkins, C Matthew; Heilbrun, Marta E; Smith, Stacy E; Oselkin, Martin; Gupta, Pushpender; Ali, Sayed

    2016-01-01

    Rapid growth in the amount of data that is electronically recorded as part of routine clinical operations has generated great interest in the use of Big Data methodologies to address clinical and research questions. These methods can efficiently analyze and deliver insights from high-volume, high-variety, and high-growth rate datasets generated across the continuum of care, thereby forgoing the time, cost, and effort of more focused and controlled hypothesis-driven research. By virtue of an existing robust information technology infrastructure and years of archived digital data, radiology departments are particularly well positioned to take advantage of emerging Big Data techniques. In this review, we describe four areas in which Big Data is poised to have an immediate impact on radiology practice, research, and operations. In addition, we provide an overview of the Big Data adoption cycle and describe how academic radiology departments can promote Big Data development. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  1. Big Data Provenance: Challenges, State of the Art and Opportunities.

    PubMed

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    The ability to track provenance is a key feature of scientific workflows, supporting data lineage and reproducibility. The challenges introduced by the volume, variety and velocity of Big Data also pose related challenges for the provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle, including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
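
    A minimal sketch of step-level provenance capture follows: each workflow step records fingerprints of what it read and wrote, so a dataset's lineage can later be audited or replayed. The logging schema, field names, and fingerprinting choice are invented for illustration; this is not the workflow system the authors build on.

```python
# Toy provenance log for workflow steps (illustrative schema only).
import hashlib
import json
import time

provenance_log = []

def run_step(name, func, inputs):
    """Run one workflow step and append a provenance record for it."""
    output = func(*inputs)
    provenance_log.append({
        "step": name,
        "time": time.time(),
        "inputs": [hashlib.sha256(repr(i).encode()).hexdigest()[:12]
                   for i in inputs],          # content fingerprints
        "output": hashlib.sha256(repr(output).encode()).hexdigest()[:12],
    })
    return output

raw = run_step("ingest", lambda: [3, 1, 2], [])
clean = run_step("sort", sorted, [raw])
print(json.dumps(provenance_log, indent=2))   # replayable lineage record
```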

  2. Big Data Provenance: Challenges, State of the Art and Opportunities

    PubMed Central

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2017-01-01

    The ability to track provenance is a key feature of scientific workflows, supporting data lineage and reproducibility. The challenges introduced by the volume, variety and velocity of Big Data also pose related challenges for the provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle, including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data. PMID:29399671

  3. 1976 Big Thompson flood, Colorado

    USGS Publications Warehouse

    Jarrett, R. D.; Vandas, S.J.

    2006-01-01

    In the early evening of July 31, 1976, a large stationary thunderstorm released as much as 7.5 inches of rainfall in about an hour (about 12 inches in a few hours) in the upper reaches of the Big Thompson River drainage. This large amount of rainfall in such a short period of time produced a flash flood that caught residents and tourists by surprise. The immense volume of water that churned down the narrow Big Thompson Canyon scoured the river channel and destroyed everything in its path, including 418 homes, 52 businesses, numerous bridges, paved and unpaved roads, power and telephone lines, and many other structures. The tragedy claimed the lives of 144 people. Scores of other people narrowly escaped with their lives. The Big Thompson flood ranks among the deadliest of Colorado's recorded floods. It is one of several destructive floods in the United States that has shown the necessity of conducting research to determine the causes and effects of floods. The U.S. Geological Survey (USGS) conducts research and operates a Nationwide streamgage network to help understand and predict the magnitude and likelihood of large streamflow events such as the Big Thompson Flood. Such research and streamgage information are part of an ongoing USGS effort to reduce flood hazards and to increase public awareness.

  4. [Embracing medical innovation in the era of big data].

    PubMed

    You, Suning

    2015-01-01

    Along with the worldwide advent of the big data era, the medical field inevitably has to place itself within it. This article thoroughly introduces the basic knowledge of big data and points out that its advantages and disadvantages coexist. Although innovation in the medical field is a struggle, the current medical pattern will be changed fundamentally by big data. The article also shows how quickly the relevant analyses are changing in the big data era, depicts a promising vision of digital medicine, and offers advice to surgeons.

  5. SeqHBase: a big data toolset for family based sequencing data analysis.

    PubMed

    He, Min; Person, Thomas N; Hebbring, Scott J; Heinzen, Ethan; Ye, Zhan; Schrodi, Steven J; McPherson, Elizabeth W; Lin, Simon M; Peissig, Peggy L; Brilliant, Murray H; O'Rawe, Jason; Robison, Reid J; Lyon, Gholson J; Wang, Kai

    2015-04-01

    Whole-genome sequencing (WGS) and whole-exome sequencing (WES) technologies are increasingly used to identify disease-contributing mutations in human genomic studies. It can be a significant challenge to process such data, especially when a large family or cohort is sequenced. Our objective was to develop a big data toolset to efficiently manipulate genome-wide variants, functional annotations and coverage, together with conducting family based sequencing data analysis. Hadoop is a framework for reliable, scalable, distributed processing of large data sets using MapReduce programming models. Based on Hadoop and HBase, we developed SeqHBase, a big data-based toolset for analysing family based sequencing data to detect de novo, inherited homozygous, or compound heterozygous mutations that may contribute to disease manifestations. SeqHBase takes as input BAM files (for coverage at every site), variant call format (VCF) files (for variant calls) and functional annotations (for variant prioritisation). We applied SeqHBase to a 5-member nuclear family and a 10-member 3-generation family with WGS data, as well as a 4-member nuclear family with WES data. Analysis times were almost linearly scalable with the number of data nodes. With 20 data nodes, SeqHBase took about 5 seconds to analyse WES familial data and approximately 1 min to analyse WGS familial data. These results demonstrate SeqHBase's high efficiency and scalability, which is necessary as WGS and WES are rapidly becoming standard methods to study the genetics of familial disorders. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
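
    The family-based logic that SeqHBase automates at scale can be illustrated with a tiny trio filter: a child variant is a de novo candidate when both parents are homozygous reference. The genotype encoding and sites below are toy inventions, and real pipelines add depth, quality, and annotation filters; this is not the Hadoop/HBase implementation itself.

```python
# Toy trio-based de novo candidate filter (illustrative data and encoding).
from typing import Dict

# Genotype per sample at each site: 0 = hom-ref, 1 = het, 2 = hom-alt.
trio_calls: Dict[str, Dict[str, int]] = {
    "chr1:12345:A>G": {"father": 0, "mother": 0, "child": 1},  # candidate
    "chr2:67890:C>T": {"father": 1, "mother": 0, "child": 1},  # inherited
}

def de_novo_candidates(calls):
    # Flag sites where the child carries an allele absent from both parents.
    return [site for site, gt in calls.items()
            if gt["father"] == 0 and gt["mother"] == 0 and gt["child"] > 0]

print(de_novo_candidates(trio_calls))  # ['chr1:12345:A>G']
```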

  6. Application and Exploration of Big Data Mining in Clinical Medicine.

    PubMed

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review the theories and technologies of big data mining and their application in clinical medicine. Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine was obtained from PubMed and the Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining, including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
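
    As a small illustration of one mining technique from the review's list, the sketch below fits a depth-limited decision tree for a toy disease-risk assessment. The features, thresholds, and labels are entirely invented; no clinical dataset from the review is used.

```python
# Toy decision tree for disease risk assessment (invented data and labels).
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: age, systolic blood pressure, BMI; label 1 = "high risk" (toy).
X = [[45, 130, 27], [62, 160, 31], [33, 118, 22], [58, 150, 29],
     [29, 110, 20], [70, 170, 33], [41, 125, 24], [66, 155, 30]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "sbp", "bmi"]))  # learned rules
print(tree.predict([[50, 145, 28]]))   # risk class for a new patient
```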

  7. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    PubMed

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  8. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  9. A proposed framework of big data readiness in public sectors

    NASA Astrophysics Data System (ADS)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery in the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of its main national agenda items. Regardless of the government's commitment to promote big data amongst government agencies, the degree of readiness of the agencies and their employees is crucial to ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate the perceived readiness for big data amongst Malaysian government agencies. The perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness for big data among public agencies and the outcomes expected from greater or lower change readiness in the public sector.

  10. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  11. 78 FR 33326 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held July 15, 2013 at 3:00 p.m. ADDRESSES: The meeting will be held at Big Horn County Weed and...

  12. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held on March 3, 2011, and will begin at 10 a.m. ADDRESSES: The meeting will be held at the Big...

  13. In Search of the Big Bubble

    ERIC Educational Resources Information Center

    Simoson, Andrew; Wentzky, Bethany

    2011-01-01

    Freely rising air bubbles in water sometimes assume the shape of a spherical cap, a shape also known as the "big bubble". Is it possible to find some objective function involving a combination of a bubble's attributes for which the big bubble is the optimal shape? Following the basic idea of the definite integral, we define a bubble's surface as…

  14. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of

  15. Big Data Analytics in Healthcare

    PubMed Central

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Beard, Daniel A.

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957

  16. Big Data Analytics in Healthcare.

    PubMed

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  17. Mountain big sagebrush (Artemisia tridentata spp vaseyana) seed production

    Treesearch

    Melissa L. Landeen

    2015-01-01

    Big sagebrush (Artemisia tridentata Nutt.) is the most widespread and common shrub in the sagebrush biome of western North America. Of the three most common subspecies of big sagebrush (Artemisia tridentata), mountain big sagebrush (ssp. vaseyana; MBS) is the most resilient to disturbance, but still requires favorable climatic conditions and a viable post-...

  18. New Evidence on the Development of the Word "Big."

    ERIC Educational Resources Information Center

    Sena, Rhonda; Smith, Linda B.

    1990-01-01

    Results indicate that the curvilinear trend in children's understanding of the word "big" is not obtained in all stimulus contexts. This suggests that the meaning and use of "big" is complex and may not refer simply to larger objects in a set. Proposes that the meaning of "big" constitutes a dynamic system driven by many perceptual,…

  19. Investigating Seed Longevity of Big Sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Wijayratne, Upekala C.; Pyke, David A.

    2009-01-01

    The Intermountain West is dominated by big sagebrush communities (Artemisia tridentata subspecies) that provide habitat and forage for wildlife, prevent erosion, and are economically important to recreation and livestock industries. The two most prominent subspecies of big sagebrush in this region are Wyoming big sagebrush (A. t. ssp. wyomingensis) and mountain big sagebrush (A. t. ssp. vaseyana). Increased understanding of seed bank dynamics will assist with sustainable management and persistence of sagebrush communities. For example, mountain big sagebrush may be subjected to shorter fire return intervals and prescribed fire is a tool used often to rejuvenate stands and reduce tree (Juniperus sp. or Pinus sp.) encroachment into these communities. A persistent seed bank for mountain big sagebrush would be advantageous under these circumstances. Laboratory germination trials indicate that seed dormancy in big sagebrush may be habitat-specific, with collections from colder sites being more dormant. Our objective was to investigate seed longevity of both subspecies by evaluating viability of seeds in the field with a seed retrieval experiment and sampling for seeds in situ. We chose six study sites for each subspecies. These sites were dispersed across eastern Oregon, southern Idaho, northwestern Utah, and eastern Nevada. Ninety-six polyester mesh bags, each containing 100 seeds of a subspecies, were placed at each site during November 2006. Seed bags were placed in three locations: (1) at the soil surface above litter, (2) on the soil surface beneath litter, and (3) 3 cm below the soil surface to determine whether dormancy is affected by continued darkness or environmental conditions. Subsets of seeds were examined in April and November in both 2007 and 2008 to determine seed viability dynamics. Seed bank samples were taken at each site, separated into litter and soil fractions, and assessed for number of germinable seeds in a greenhouse. Community composition data

  20. Smart Information Management in Health Big Data.

    PubMed

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymous patient records in a big data store and their extraction in order to provide needed real-time intelligence. The purpose of the present study is to highlight the design and the implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files, simulating a NoSQL database, and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. In health big data, SIMS aims at the identification of new therapies and approaches to delivering care.
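
    The lookup-table-plus-cache read path described above can be sketched briefly: records sit in a flat file (simulating a NoSQL store), a lookup table maps record IDs to byte offsets, and a small LRU cache serves repeated reads. The file name, record layout, and cache size are invented for illustration; this is not the authors' implementation.

```python
# Toy flat-file record store with an offset lookup table and an LRU cache.
import json
from functools import lru_cache

records = [{"id": f"pat{i}", "age": 30 + i, "dx": "I10"} for i in range(3)]
offsets = {}                          # lookup table: record id -> byte offset
with open("records.jsonl", "wb") as f:
    for rec in records:
        offsets[rec["id"]] = f.tell()
        f.write((json.dumps(rec) + "\n").encode())

@lru_cache(maxsize=1024)              # cache mechanism for hot records
def get_record(rec_id: str) -> dict:
    with open("records.jsonl", "rb") as f:
        f.seek(offsets[rec_id])       # jump straight to the record on disk
        return json.loads(f.readline())

print(get_record("pat1"))             # first call reads from disk
print(get_record("pat1"))             # second call is served from the cache
```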

  1. Integrative methods for analyzing big data in precision medicine.

    PubMed

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advances in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Big Dreams

    ERIC Educational Resources Information Center

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  3. Translating Big Data into Smart Data for Veterinary Epidemiology.

    PubMed

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to create "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  4. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.
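
    One common way machine learning meets data too large for memory, in the spirit of the review, is out-of-core (streaming) learning. The sketch below feeds synthetic mini-batches to scikit-learn's partial_fit interface; the data, batch sizes, and labeling rule are invented stand-ins for, say, genome-scale feature matrices.

```python
# Out-of-core learning sketch: stream mini-batches through partial_fit so the
# full dataset never has to sit in memory at once (illustrative data only).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)   # logistic-style loss

for _ in range(50):                          # 50 mini-batches, one at a time
    X = rng.normal(size=(1_000, 100))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy labeling rule
    clf.partial_fit(X, y, classes=[0, 1])

X_test = rng.normal(size=(2_000, 100))
y_test = (X_test[:, 0] - X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```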

  5. Quality of Big Data in Healthcare

    DOE PAGES

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    2015-01-01

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  6. Quality of Big Data in Healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  7. Nuclear power: the bargain we can't afford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, R.

    1977-01-01

    This is a handbook for citizens who wish to raise questions about the costs of atomic energy. It explains, step-by-step, why nuclear reactors have failed to produce low-cost electricity, and it tells citizens how they can use economic arguments to challenge nuclear expansion. Part One, The Costs of Nuclear Energy, contains 7 chapters--The Price of Power (electricity is big business); Mushrooming Capital Costs (nuclear construction costs are skyrocketing); Nuclear Lemons (reactors spend much of their time closed for repairs); The Faulty Fuel Cycle (turning uranium into electricity is not as simple as the utilities say); Hidden Costs (government subsidies obscure the true costs of atomic energy); Ratepayer Roulette (nuclear problems translate into higher electric rates); and Alternatives to the Atom (coal-fired power and energy conservation can meet future energy needs more cheaply than nuclear energy). Part Two, Challenging Nuclear Power, contains 3 chapters--Regulators and Reactors (state utility commissions can eliminate the power companies' bias toward nuclear energy); Legislation, Licensing, and Lawsuits (nuclear critics can challenge reactor construction in numerous forums); and Winning the Battle (building an organization is a crucial step in fighting nuclear power). (MCW)

  8. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  10. The BIG Data Center: from deposition to integration to translation

    PubMed Central

    2017-01-01

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. PMID:27899658

  11. The BIG Data Center: from deposition to integration to translation.

    PubMed

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Application and Exploration of Big Data Mining in Clinical Medicine

    PubMed Central

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378

  13. Rethinking big data: A review on the data quality and usage issues

    NASA Astrophysics Data System (ADS)

    Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng

    2016-05-01

    The recent explosive growth of publications on big data studies has well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data also brings many "big errors" in data quality and data usage, and it cannot serve as a substitute for sound research design and solid theories. We identified and summarized the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection; information incompleteness and noise; unrepresentativeness; consistency and reliability; and ethical issues. Cases from empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories," as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.

  14. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a major current trend, and it is having significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was proposed as an alternative to the standard SVM and has proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.
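
    To make the SSVM idea concrete, a minimal sketch in Python follows (our illustration, not the authors' code; NumPy/SciPy are assumed, and the toy data, smoothing parameter alpha and weight nu are invented). It minimizes the usual smooth-SVM objective, in which the plus function max(x, 0) is replaced by the smooth approximation p(x, alpha) = x + log(1 + exp(-alpha*x))/alpha, so standard quasi-Newton solvers apply directly.

        import numpy as np
        from scipy.optimize import minimize

        def smooth_plus(x, alpha=5.0):
            # Smooth approximation of max(x, 0); logaddexp keeps it numerically stable.
            return x + np.logaddexp(0.0, -alpha * x) / alpha

        def ssvm_objective(params, A, d, nu=1.0, alpha=5.0):
            # (nu/2)*||p(1 - d*(A w - gamma))||^2 + (1/2)*(w'w + gamma^2)
            w, gamma = params[:-1], params[-1]
            margins = 1.0 - d * (A @ w - gamma)
            return (0.5 * nu * np.sum(smooth_plus(margins, alpha) ** 2)
                    + 0.5 * (w @ w + gamma ** 2))

        # Toy two-class data standing in for a large-scale set.
        rng = np.random.default_rng(0)
        A = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
        d = np.array([-1.0] * 50 + [1.0] * 50)

        res = minimize(ssvm_objective, np.zeros(3), args=(A, d), method="BFGS")
        w, gamma = res.x[:-1], res.x[-1]
        print("separating plane: w =", w, ", gamma =", gamma)

    Because the objective is smooth everywhere, no quadratic-programming solver is needed, which is the efficiency argument behind SSVM for large-scale data.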

  15. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a major current trend, and it is having significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was proposed as an alternative to the standard SVM and has proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear. PMID:25143968

  16. Exploring nuclear reactions relevant to Stellar and Big-Bang Nucleosynthesis using High-Energy-Density plasmas at OMEGA and the NIF

    NASA Astrophysics Data System (ADS)

    Gatu Johnson, M.

    2017-10-01

    Thermonuclear reaction rates and nuclear processes have traditionally been explored by means of accelerator experiments, which are difficult to execute at conditions relevant to Stellar Nucleosynthesis (SN) and Big Bang Nucleosynthesis (BBN). High-Energy-Density (HED) plasmas closely mimic astrophysical environments and are an excellent complement to accelerator experiments in exploring SN- and BBN-relevant nuclear reactions. To date, our work using HED plasmas at OMEGA and the NIF has focused on the complementary 3He+3He, T+3He and T+T reactions. First studies of the T+T reaction indicated the significance of the 5He ground-state resonance in the T+T neutron spectrum. Subsequent T+T experiments showed that the strength of this resonance varies with center-of-mass (c-m) energy in the range of 16-50 keV, a variation that is not fundamentally understood. Studies of the 3He+3He and T+3He reactions have also been conducted at OMEGA at c-m energies of 165 keV and 80 keV, respectively, and the results revealed three things. First, a large cross section for the T+3He γ branch can be ruled out as an explanation for the anomalously high abundance of 6Li in primordial material. Second, the results contrasted with theoretical modeling indicate that the mirror-symmetry assumption is not enough to capture the differences between T+T and 3He+3He reactions. Third, the elliptical spectrum assumed in the analysis of 3He+3He data obtained in accelerator experiments is incorrect. Preliminary data from recent experiments at the NIF exploring the 3He+3He reaction at c-m energies of 60 keV and 100 keV also indicate that the underlying physics changes with c-m energy. In this talk, we describe these findings and future directions for exploring light-ion reactions at OMEGA and the NIF. The work was supported in part by the US DOE, LLE, and LLNL.

  17. Processing Solutions for Big Data in Astronomy

    NASA Astrophysics Data System (ADS)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
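
    As a concrete illustration of the MapReduce schema surveyed above, the following Python sketch runs a word count with the PySpark API (assuming a local Spark installation; the input path is hypothetical). The same flatMap/map/reduceByKey pattern is what Hadoop MapReduce expresses through mapper and reducer classes.

        from pyspark import SparkContext

        sc = SparkContext("local[*]", "wordcount-sketch")

        # Split each line into words, emit (word, 1) pairs,
        # and sum the counts per word in parallel.
        counts = (sc.textFile("hdfs:///data/sample.txt")
                    .flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))

        for word, n in counts.take(10):
            print(word, n)

        sc.stop()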

  18. "Small Steps, Big Rewards": Preventing Type 2 Diabetes

    MedlinePlus

    ... Feature: Diabetes. "Small Steps, Big Rewards": Preventing Type 2 Diabetes ... These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign ...

  19. Big Bend National Park

    NASA Image and Video Library

    2017-12-08

    Alternately known as a geologist’s paradise and a geologist’s nightmare, Big Bend National Park in southwestern Texas offers a multitude of rock formations. Sparse vegetation makes finding and observing the rocks easy, but they document a complicated geologic history extending back 500 million years. On May 10, 2002, the Enhanced Thematic Mapper Plus on NASA’s Landsat 7 satellite captured this natural-color image of Big Bend National Park. A black line delineates the park perimeter. The arid landscape appears in muted earth tones, some of the darkest hues associated with volcanic structures, especially the Rosillos and Chisos Mountains. Despite its bone-dry appearance, Big Bend National Park is home to some 1,200 plant species, and hosts more kinds of cacti, birds, and bats than any other U.S. national park. Read more: go.nasa.gov/2bzGaZU Credit: NASA/Landsat7

  20. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; Paris, Mark W.

    2017-03-01

    We calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. We analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.
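
    For orientation (a standard parametrization, not a result of the paper): a lepton asymmetry is commonly encoded as a degeneracy parameter $\xi_\nu = \mu_\nu/T_\nu$ in the equilibrium Fermi-Dirac distribution, and the spectral distortions discussed above are departures from this form,

        f_\nu(E) = \frac{1}{e^{E/T_\nu - \xi_\nu} + 1}, \qquad
        n_\nu - n_{\bar{\nu}} = \frac{T_\nu^3}{6\pi^2}\left(\pi^2 \xi_\nu + \xi_\nu^3\right),

    with the antineutrino distribution obtained by taking $\xi_\nu \to -\xi_\nu$.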

  1. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed around the world. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information that not only humans but also computers can semantically process at large scale. The paper presents a survey of big data in the life sciences, big data-related projects, and Semantic Web technologies. It introduces the main Semantic Web technologies and their current state, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps readers understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for big data in the life sciences.
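
    As a small illustration of the Semantic Web stack surveyed here (RDF triples queried with SPARQL), the following Python sketch uses the rdflib library; the namespace, identifiers and property names are invented examples rather than a real life-sciences vocabulary.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/bio/")  # hypothetical vocabulary
        g = Graph()

        # Two RDF triples linking a gene to a pathway and a label.
        g.add((EX.gene42, EX.participatesIn, EX.glycolysis))
        g.add((EX.gene42, EX.label, Literal("GENE42")))

        # A SPARQL query over the (tiny) graph.
        q = """
            SELECT ?gene ?pathway WHERE {
                ?gene <http://example.org/bio/participatesIn> ?pathway .
            }
        """
        for gene, pathway in g.query(q):
            print(gene, "->", pathway)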

  2. Big data analytics to aid developing livable communities.

    DOT National Transportation Integrated Search

    2015-12-31

    In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...

  3. Ontogeny of Big endothelin-1 effects in newborn piglet pulmonary vasculature.

    PubMed

    Liben, S; Stewart, D J; De Marte, J; Perreault, T

    1993-07-01

    Endothelin-1 (ET-1), a 21-amino acid peptide produced by endothelial cells, results from the cleavage of preproendothelin, generating Big ET-1, which is then cleaved by the ET-converting enzyme (ECE) to form ET-1. Big ET-1, like ET-1, is released by endothelial cells. Big ET-1 is equipotent to ET-1 in vivo, whereas its vasoactive effects are less in vitro. It has been suggested that the effects of Big ET-1 depend on its conversion to ET-1. ET-1 has potent vasoactive effects in the newborn pig pulmonary circulation; however, the effects of Big ET-1 remain unknown. Therefore, we studied the effects of Big ET-1 in isolated perfused lungs from 1- and 7-day-old piglets using the ECE inhibitor, phosphoramidon, and the ETA receptor antagonist, BQ-123Na. The rate of conversion of Big ET-1 to ET-1 was measured using radioimmunoassay. ET-1 (10^-13 to 10^-8 M) produced an initial vasodilation, followed by a dose-dependent potent vasoconstriction (P < 0.001), which was equal at both ages. Big ET-1 (10^-11 to 10^-8 M) also produced a dose-dependent vasoconstriction (P < 0.001). The constrictor effects of Big ET-1 and ET-1 were similar in the 1-day-old, whereas in the 7-day-old, the constrictor effect of Big ET-1 was less than that of ET-1 (P < 0.017). (ABSTRACT TRUNCATED AT 250 WORDS)

  4. Infrastructure for Big Data in the Intensive Care Unit.

    PubMed

    Zelechower, Javier; Astudillo, José; Traversaro, Francisco; Redelico, Francisco; Luna, Daniel; Quiros, Fernan; San Roman, Eduardo; Risk, Marcelo

    2017-01-01

    The Big Data paradigm can be applied in the intensive care unit in order to improve the treatment of patients through customized decisions. This poster is about the infrastructure necessary to build a Big Data system for the ICU. Together with the infrastructure, the formation of a multidisciplinary team is essential to develop Big Data applications for use in critical care medicine.

  5. 76 FR 7837 - Big Rivers Electric Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ11-11-000] Big Rivers Electric Corporation; Notice of Filing Take notice that on February 4, 2011, Big Rivers Electric Corporation (Big Rivers) filed a notice of cancellation of its Second Revised and Restated Open Access...

  6. Data management by using R: big data clinical research series.

    PubMed

    Zhang, Zhongheng

    2015-11-01

    Electronic medical record (EMR) systems have been widely used in clinical practice. Unlike traditional records kept by handwriting, the EMR makes big data clinical research feasible. The most important feature of big data research is its real-world setting. Furthermore, big data research can provide all aspects of information related to healthcare. However, big data research requires skills in data management that are often lacking in the curriculum of medical education. This greatly hinders doctors from testing their clinical hypotheses by using the EMR. To bridge this gap, a series of articles introducing data management techniques is put forward to guide clinicians toward big data clinical research. The present educational article first introduces some basic knowledge of the R language, followed by data management skills for creating new variables, recoding variables and renaming variables. These are very basic skills that may be used in every big data research project.
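
    The article itself teaches these steps in R; purely to illustrate the same three operations (creating, recoding and renaming variables), here is an equivalent sketch in Python with pandas on a made-up toy record set.

        import pandas as pd

        # Toy EMR-like records (illustrative values only).
        df = pd.DataFrame({"age": [34, 71, 58], "scr": [0.9, 1.4, 2.1]})

        # 1. Create a new variable from an existing one.
        df["elderly"] = df["age"] >= 65

        # 2. Recode a continuous variable into categories.
        df["scr_cat"] = pd.cut(df["scr"], bins=[0, 1.2, 2.0, float("inf")],
                               labels=["normal", "elevated", "high"])

        # 3. Rename a variable to a clearer name.
        df = df.rename(columns={"scr": "serum_creatinine"})
        print(df)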

  7. (Quasi)-convexification of Barta's (multi-extrema) bounding theorem: $\inf_x \big(\frac{H\Phi(x)}{\Phi(x)}\big) \le E_{gr} \le \sup_x \big(\frac{H\Phi(x)}{\Phi(x)}\big)$

    NASA Astrophysics Data System (ADS)

    Handy, C. R.

    2006-03-01

    There has been renewed interest in the exploitation of Barta's configuration space theorem (BCST) (Barta 1937 C. R. Acad. Sci. Paris 204 472), which bounds the ground-state energy by $\inf_x \big(\frac{H\Phi(x)}{\Phi(x)}\big) \le E_{gr} \le \sup_x \big(\frac{H\Phi(x)}{\Phi(x)}\big)$ for any $\Phi$ lying within the space $\mathcal{C}$ of positive, bounded, and sufficiently smooth functions. Mouchet's (Mouchet 2005 J. Phys. A: Math. Gen. 38 1039) BCST analysis is based on gradient optimization (GO). However, it overlooks significant difficulties: (i) the appearance of multi-extrema; (ii) the inefficiency of GO for stiff (singular perturbation/strong coupling) problems; (iii) the nonexistence of a systematic procedure for arbitrarily improving the bounds within $\mathcal{C}$. These deficiencies can be corrected by transforming BCST into a moments' representation equivalent, and exploiting a generalization of the eigenvalue moment method (EMM), within the context of the well-known generalized eigenvalue problem (GEP), as developed here. EMM is an alternative eigenenergy bounding, variational procedure, overlooked by Mouchet, which also exploits the positivity of the desired physical solution. Furthermore, it is applicable to Hermitian and non-Hermitian systems with complex-number quantization parameters (Handy and Bessis 1985 Phys. Rev. Lett. 55 931, Handy et al 1988 Phys. Rev. Lett. 60 253, Handy 2001 J. Phys. A: Math. Gen. 34 5065, Handy et al 2002 J. Phys. A: Math. Gen. 35 6359). Our analysis exploits various quasi-convexity/concavity theorems common to the GEP representation. We outline the general theory, and present some illustrative examples.
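
    A worked example (ours, not from the paper) shows how the bound operates for the harmonic oscillator $H = -\frac{d^2}{dx^2} + x^2$ with the trial family $\Phi_\alpha(x) = e^{-\alpha x^2/2} \in \mathcal{C}$:

        \frac{H\Phi_\alpha(x)}{\Phi_\alpha(x)} = \alpha + (1 - \alpha^2)\,x^2 .

    For $\alpha < 1$ the infimum over $x$ is $\alpha$, giving $E_{gr} \ge \alpha$; for $\alpha > 1$ the supremum over $x$ is $\alpha$, giving $E_{gr} \le \alpha$. Letting $\alpha \to 1$ from both sides pins down $E_{gr} = 1$ (in units $\hbar = 2m = 1$), and at $\alpha = 1$ the ratio is constant: the bounds are tight exactly when $\Phi$ is the true ground state.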

  8. Keeping up with Big Data--Designing an Introductory Data Analytics Class

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2016-01-01

    Universities need to keep up with the demand of the business world when it comes to Big Data. The exponential increase in data has put additional demands on academia to close the wide gap in education. Business demand for Big Data surpassed 1.9 million positions in 2015. Big Data, Business Intelligence, Data Analytics, and Data Mining are the…

  9. Big-bang nucleosynthesis revisited

    NASA Technical Reports Server (NTRS)

    Olive, Keith A.; Schramm, David N.; Steigman, Gary; Walker, Terry P.

    1989-01-01

    The homogeneous big-bang nucleosynthesis yields of D, He-3, He-4, and Li-7 are computed taking into account recent measurements of the neutron mean-life as well as updates of several nuclear reaction rates which primarily affect the production of Li-7. The extraction of primordial abundances from observation and the likelihood that the primordial mass fraction of He-4, $Y_p$, is less than or equal to 0.24 are discussed. Using the primordial abundances of D + He-3 and Li-7 we limit the baryon-to-photon ratio ($\eta_{10}$, $\eta$ in units of $10^{-10}$) to $2.6 \le \eta_{10} \le 4.3$, which we use to argue that baryons contribute between 0.02 and 0.11 to the critical energy density of the universe. An upper limit to $Y_p$ of 0.24 constrains the number of light neutrinos to $N_\nu \le 3.4$, in excellent agreement with the LEP and SLC collider results. We turn this argument around to show that the collider limit of 3 neutrino species can be used to bound the primordial abundance of He-4: $0.235 \le Y_p \le 0.245$.
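
    For readers converting between the quantities above, a standard (approximate) relation maps the baryon-to-photon ratio $\eta = n_b/n_\gamma$ to the baryon density,

        \Omega_b h^2 \simeq \frac{\eta_{10}}{274}, \qquad 2.6 \le \eta_{10} \le 4.3 \;\Rightarrow\; 0.009 \lesssim \Omega_b h^2 \lesssim 0.016,

    so for a Hubble parameter in the broad range $0.4 \lesssim h \lesssim 1$ considered at the time, $\Omega_b = \Omega_b h^2 / h^2$ spans roughly 0.01-0.10, of the same order as the 0.02-0.11 interval quoted in the abstract.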

  10. [Applications of eco-environmental big data: Progress and prospect].

    PubMed

    Zhao, Miao Miao; Zhao, Shi Cheng; Zhang, Li Yun; Zhao, Fen; Shao, Rui; Liu, Li Xiang; Zhao, Hai Feng; Xu, Ming

    2017-05-18

    With the advance of internet and wireless communication technology, the fields of ecology and environment have entered a new digital era, with the amount of data growing explosively and big data technologies attracting more and more attention. Eco-environmental big data is based on airborne, space-, and land-based observations of ecological and environmental factors, and its ultimate goal is to integrate multi-source and multi-scale data for information mining by taking advantage of cloud computation, artificial intelligence, and modeling technologies. In comparison with other fields, eco-environmental big data has its own characteristics, such as diverse data formats and sources, data collected with various protocols and standards, and serving different clients and organizations with special requirements. Big data technology has been applied worldwide in ecological and environmental fields, including global climate prediction, ecological network observation and modeling, and regional air pollution control. The development of eco-environmental big data in China is facing many problems, such as data sharing issues, outdated monitoring facilities and technologies, and insufficient data mining capacity. Despite all this, big data technology is critical to solving eco-environmental problems, improving prediction and warning accuracy for eco-environmental catastrophes, and boosting scientific research in the field in China. We expect that eco-environmental big data will contribute significantly to policy making and environmental services and management, and thus to sustainable development and eco-civilization construction in China in the coming decades.

  11. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

  12. Insights into big sagebrush seedling storage practices

    Treesearch

    Emily C. Overton; Jeremiah R. Pinto; Anthony S. Davis

    2013-01-01

    Big sagebrush (Artemisia tridentata Nutt. [Asteraceae]) is an essential component of shrub-steppe ecosystems in the Great Basin of the US, where degradation due to altered fire regimes, invasive species, and land use changes have led to increased interest in the production of high-quality big sagebrush seedlings for conservation and restoration projects. Seedling...

  13. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  14. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  15. Big Data and Nursing: Implications for the Future.

    PubMed

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

    Big data is becoming increasingly more prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the required skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing-sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  16. Information Retrieval Using Hadoop Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Motwani, Deepak; Madan, Madan Lal

    This paper concerns big data analysis, the process of probing huge amounts of information in an attempt to uncover hidden patterns. Through big data analytics applications, public- and private-sector organizations have made a strategic decision to turn big data into competitive advantage. Extracting value from big data requires a process that pulls information from multiple different sources; this process is known as extract, transform, and load (ETL). Our approach extracts information from log files and research papers, reducing the effort needed for pattern finding and for summarizing documents from several sources. The work helps in better understanding basic Hadoop concepts and improves the user experience for research. In this paper, we propose a Hadoop-based approach for analyzing log files to find concise, useful information while saving time. The proposed approach is then applied to research papers in a specific domain to produce summarized content for further refinement.
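
    To illustrate the kind of Hadoop log analysis described (a sketch under assumptions: Hadoop Streaming over plain-text logs whose severity level is the third whitespace-separated field; file names and field positions are hypothetical), the mapper and reducer can be two small Python scripts:

        # mapper.py -- emit (severity, 1) for each log line
        import sys

        for line in sys.stdin:
            fields = line.split()
            if len(fields) >= 3:
                print(fields[2] + "\t1")  # e.g. "ERROR\t1"

        # reducer.py -- sum the counts per severity (input arrives sorted by key)
        import sys

        current, total = None, 0
        for line in sys.stdin:
            key, value = line.rsplit("\t", 1)
            if key != current:
                if current is not None:
                    print(current + "\t" + str(total))
                current, total = key, 0
            total += int(value)
        if current is not None:
            print(current + "\t" + str(total))

    The pair is then submitted with the standard streaming jar, e.g. hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /logs -output /log-counts (paths hypothetical).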

  17. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  18. The big data processing platform for intelligent agriculture

    NASA Astrophysics Data System (ADS)

    Huang, Jintao; Zhang, Lichen

    2017-08-01

    Big data technology is another popular technology after the Internet of Things and cloud computing, and it is widely used in many fields such as social platforms, e-commerce, and financial analysis. Intelligent agriculture produces large amounts of data of complex structure in the course of its operation, and fully mining the value of these data will be very meaningful for the development of agriculture. This paper proposes an intelligent data processing platform based on Storm and Cassandra to realize the storage and management of big data for intelligent agriculture.
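
    As a sketch of the storage side only (assuming the Python cassandra-driver package and a local Cassandra node; the keyspace, table and sensor fields are invented for illustration and do not come from the paper):

        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])  # local node (assumption)
        session = cluster.connect()

        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS agri
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
        """)
        session.execute("""
            CREATE TABLE IF NOT EXISTS agri.readings (
                field_id text, ts timestamp, soil_moisture double,
                PRIMARY KEY (field_id, ts)
            )
        """)

        # Insert one sensor reading (values illustrative); in the platform
        # described above, a Storm topology would perform such writes.
        session.execute(
            "INSERT INTO agri.readings (field_id, ts, soil_moisture) "
            "VALUES (%s, toTimestamp(now()), %s)",
            ("field-07", 0.23),
        )
        cluster.shutdown()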

  19. Research Activities at Fermilab for Big Data Movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W

    2013-01-01

    Adaptation of 100GE networking infrastructure is the next step towards management of big data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of big data. In this paper, we describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.

  20. [Utilization of Big Data in Medicine and Future Outlook].

    PubMed

    Kinosada, Yasutomi; Uematsu, Machiko; Fujiwara, Takuya

    2016-03-01

    "Big data" is a new buzzword. The point is not to be dazzled by the volume of data, but rather to analyze it, and convert it into insights, innovations, and business value. There are also real differences between conventional analytics and big data. In this article, we show some results of big data analysis using open DPC (Diagnosis Procedure Combination) data in areas of the central part of JAPAN: Toyama, Ishikawa, Fukui, Nagano, Gifu, Aichi, Shizuoka, and Mie Prefectures. These 8 prefectures contain 51 medical administration areas called the second medical area. By applying big data analysis techniques such as k-means, hierarchical clustering, and self-organizing maps to DPC data, we can visualize the disease structure and detect similarities or variations among the 51 second medical areas. The combination of a big data analysis technique and open DPC data is a very powerful method to depict real figures on patient distribution in Japan.

  1. Research on Technology Innovation Management in Big Data Environment

    NASA Astrophysics Data System (ADS)

    Ma, Yanhong

    2018-02-01

    With the continuous development of the information age, the demand for information keeps growing, and the processing and analysis of information data are moving toward ever larger scales. The increasing volume of information data places higher demands on processing technology. The explosive growth of data in today's society has prompted the advent of the era of big data. Producing and processing various kinds of information and data now carries great value and significance in people's lives. How to use big data technology to process and analyze information data quickly, and thereby improve the level of big data management, is an important question for the current development of information and data processing technology in China. Innovative research on technology innovation management in the era of big data can, to some extent, enhance overall capability and secure China a strong position in the development of the big data era.

  2. Association of Big Endothelin-1 with Coronary Artery Calcification.

    PubMed

    Qing, Ping; Li, Xiao-Lin; Zhang, Yan; Li, Yi-Lin; Xu, Rui-Xia; Guo, Yuan-Lin; Li, Sha; Wu, Na-Qiong; Li, Jian-Jun

    2015-01-01

    Coronary artery calcification (CAC) is clinically considered one of the important predictors of atherosclerosis. Several studies have confirmed that endothelin-1 (ET-1) plays an important role in the process of atherosclerosis formation. The aim of this study was to investigate whether big ET-1 is associated with CAC. A total of 510 consecutively admitted patients from February 2011 to May 2012 in Fu Wai Hospital were analyzed. All patients had received coronary computed tomography angiography and were then divided into two groups based on the results of the coronary artery calcium score (CACS). The clinical characteristics, including traditional and calcification-related risk factors, were collected and the plasma big ET-1 level was measured by ELISA. Patients with CAC had a significantly elevated big ET-1 level compared with those without CAC (0.5 ± 0.4 vs. 0.2 ± 0.2, P<0.001). In the multivariate analysis, big ET-1 (Tertile 2, HR = 3.09, 95% CI 1.66-5.74, P<0.001; Tertile 3, HR = 10.42, 95% CI 3.62-29.99, P<0.001) appeared as an independent predictive factor of the presence of CAC. There was a positive correlation of the big ET-1 level with CACS (r = 0.567, p<0.001). The 10-year Framingham risk (%) was higher in the group with CACS>0 and the highest tertile of big ET-1 (P<0.01). The area under the receiver operating characteristic curve for the big ET-1 level in predicting CAC was 0.83 (95% CI 0.79-0.87, p<0.001), with a sensitivity of 70.6% and specificity of 87.7%. These data demonstrated for the first time that the plasma big ET-1 level was a valuable independent predictor of CAC in our study.

  3. Meta-analyses of Big Six Interests and Big Five Personality Factors.

    ERIC Educational Resources Information Center

    Larson, Lisa M.; Rottinghaus, Patrick J.; Borgen, Fred H.

    2002-01-01

    Meta-analysis of 24 samples demonstrated overlap between Holland's vocational interest domains (measured by Self Directed Search, Strong Interest Inventory, and Vocational Preference Inventory) and Big Five personality factors (measured by Revised NEO Personalty Inventory). The link is stronger for five interest-personality pairs:…

  4. [Big Data- challenges and risks].

    PubMed

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  5. Big game habitat use in southeastern Montana

    Treesearch

    James G. MacCracken; Daniel W. Uresk

    1984-01-01

    The loss of suitable, high quality habitat is a major problem facing big game managers in the western United States. Agricultural, water, road and highway, housing, and recreational development have contributed to loss of natural big game habitat (Wallmo et al. 1976, Reed 1981). In the western United States, surface mining of minerals has great potential to adversely...

  6. A Big Data Analytics Methodology Program in the Health Sector

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  7. View of New Big Oak Flat Road seen from Old ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of New Big Oak Flat Road seen from Old Wawona Road near location of photograph HAER CA-148-17. Note road cuts, alignment, and tunnels. Devils Dance Floor at left distance. Looking northwest - Big Oak Flat Road, Between Big Oak Flat Entrance & Merced River, Yosemite Village, Mariposa County, CA

  8. The Study of “big data” to support internal business strategists

    NASA Astrophysics Data System (ADS)

    Ge, Mei

    2018-01-01

    How is big data different from previous data analysis systems? The primary purpose behind traditional small data analytics, which all managers are more or less familiar with, is to support internal business strategies. But big data also offers a promising new dimension: discovering new opportunities to offer customers high-value products and services. This study introduces some of the strategies that big data can support. Business decisions using big data can also involve several areas of analytics, including customer satisfaction, customer journeys, supply chains, risk management, competitive intelligence, pricing, and discovery and experimentation, as well as facilitating big data discovery.

  9. Occurrence and Partial Characterization of Lettuce big vein associated virus and Mirafiori lettuce big vein virus in Lettuce in Iran.

    PubMed

    Alemzadeh, E; Izadpanah, K

    2012-12-01

    Mirafiori lettuce big vein virus (MiLBVV) and lettuce big vein associated virus (LBVaV) were found in association with big vein disease of lettuce in Iran. Analysis of part of the coat protein (CP) gene of Iranian isolates of LBVaV showed 97.1-100 % nucleotide sequence identity with other LBVaV isolates. Iranian isolates of MiLBVV belonged to subgroup A and showed 88.6-98.8 % nucleotide sequence identity with other isolates of this virus when amplified by PCR primer pair MiLV VP. The occurrence of both viruses in lettuce crop was associated with the presence of resting spores and zoosporangia of the fungus Olpidium brassicae in lettuce roots under field and greenhouse conditions. Two months after sowing lettuce seed in soil collected from a lettuce field with big vein affected plants, all seedlings were positive for LBVaV and MiLBVV, indicating soil transmission of both viruses.

  10. [Contemplation on the application of big data in clinical medicine].

    PubMed

    Lian, Lei

    2015-01-01

    Medicine is another area where big data is being used. The link between clinical treatment and outcome is the key step when applying big data in medicine. In the era of big data, it is critical to collect complete outcome data. Patient follow-up, comprehensive integration of data resources, quality control and standardized data management are the predominant approaches to avoid missing data and data islands. Therefore, the establishment of a systematic patient follow-up protocol and a prospective data management strategy are important aspects of big data in medicine.

  11. Cincinnati Big Area Additive Manufacturing (BAAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duty, Chad E.; Love, Lonnie J.

    Oak Ridge National Laboratory (ORNL) worked with Cincinnati Incorporated (CI) to demonstrate Big Area Additive Manufacturing which increases the speed of the additive manufacturing (AM) process by over 1000X, increases the size of parts by over 10X and shows a cost reduction of over 100X. ORNL worked with CI to transition the Big Area Additive Manufacturing (BAAM) technology from a proof-of-principle (TRL 2-3) demonstration to a prototype product stage (TRL 7-8).

  12. Informatics in neurocritical care: new ideas for Big Data.

    PubMed

    Flechet, Marine; Grandas, Fabian Güiza; Meyfroidt, Geert

    2016-04-01

    Big data is the new hype in business and healthcare. Data storage and processing has become cheap, fast, and easy. Business analysts and scientists are trying to design methods to mine these data for hidden knowledge. Neurocritical care is a field that typically produces large amounts of patient-related data, and these data are increasingly being digitized and stored. This review will try to look beyond the hype, and focus on possible applications in neurointensive care amenable to Big Data research that can potentially improve patient care. The first challenge in Big Data research will be the development of large, multicenter, and high-quality databases. These databases could be used to further investigate recent findings from mathematical models, developed in smaller datasets. Randomized clinical trials and Big Data research are complementary. Big Data research might be used to identify subgroups of patients that could benefit most from a certain intervention, or can be an alternative in areas where randomized clinical trials are not possible. The processing and the analysis of the large amount of patient-related information stored in clinical databases is beyond normal human cognitive ability. Big Data research applications have the potential to discover new medical knowledge, and improve care in the neurointensive care unit.

  13. Big Data Analytics for Genomic Medicine.

    PubMed

    He, Karen Y; Ge, Dongliang; He, Max M

    2017-02-15

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients' genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining large-scale various data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.

  14. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming a norm in geoscience domains. A platform that is capable of efficiently managing, accessing, analyzing, mining, and learning from big data for new information and knowledge is desired. This paper introduces our latest effort to develop such a platform based on our past years' experience with cloud and high performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization to provide on-demand computing services for upper layers; c) the 3rd layer consists of big data containers that are customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.

  15. The New Improved Big6 Workshop Handbook. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This handbook is intended to help classroom teachers, teacher-librarians, technology teachers, administrators, parents, community members, and students to learn about the Big6 Skills approach to information and technology skills, to use the Big6 process in their own activities, and to implement a Big6 information and technology skills program. The…

  16. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the conception of integrating VR scenes and power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed targeting different abstract modules such as transaction, engine, computation, control, and storage. The usually separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  17. Big Data: More than Just Big and More than Just Data.

    PubMed

    Spencer, Gregory A

    2017-01-01

    According to a report, 90 percent of the data in the world today were created in the past two years. This statistic is not surprising given the explosion of mobile phones and other devices that generate data, the Internet of Things (e.g., smart refrigerators), and metadata (data about data). While it might be a stretch to figure out how a healthcare organization can use data generated from an ice maker, data from a plethora of rich and useful sources, when combined with an organization's own data, can produce improved results. How can healthcare organizations leverage these rich and diverse data sources to improve patients' health and make their businesses more competitive? The authors of the two feature articles in this issue of Frontiers provide tangible examples of how their organizations are using big data to meaningfully improve healthcare. Sentara Healthcare and Carolinas HealthCare System both use big data in creative ways that differ because of different business situations, yet are also similar in certain respects.

  18. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  19. Big data, smart homes and ambient assisted living.

    PubMed

    Vimarlund, V; Wass, S

    2014-08-15

    To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers; it is also of interest for decision makers as customers make more informed choices among today's services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.

  20. Big Data, Smart Homes and Ambient Assisted Living

    PubMed Central

    Wass, S.

    2014-01-01

    Summary Objectives To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. Conclusions The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers; it is also of interest for decision makers as customers make more informed choices among today’s services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability. PMID:25123734

  1. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamenshchik, A. Yu.; Manti, S.

    We study a relation between the cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: a model based on a scalar field and two models based on a tachyon/pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for the soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not traversable.

  2. Big trees in the southern forest inventory

    Treesearch

    Christopher M. Oswalt; Sonja N. Oswalt; Thomas J. Brandeis

    2010-01-01

    Big trees fascinate people worldwide, inspiring respect, awe, and oftentimes, even controversy. This paper uses a modified version of American Forests’ Big Trees Measuring Guide point system (May 1990) to rank trees sampled between January of 1998 and September of 2007 on over 89,000 plots by the Forest Service, U.S. Department of Agriculture, Forest Inventory and...

  3. Nuclear constraints on the age of the universe

    NASA Technical Reports Server (NTRS)

    Schramm, D. N.

    1982-01-01

    A review is made of how one can use nuclear physics to put rather stringent limits on the age of the universe and thus the cosmic distance scale. The age can be estimated to a fair degree of accuracy. No single measurement of the time since the Big Bang gives a specific, unambiguous age. There are several methods that together fix the age with surprising precision. In particular, there are three totally independent techniques for estimating an age and a fourth technique which involves finding consistency of the other three in the framework of the standard Big Bang cosmological model. The three independent methods are: cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of the three methods, and the consistency technique.
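
    For context, the radioactive-dating method rests on standard decay chronometry (textbook relations, not formulas quoted from the review): a long-lived nuclide's abundance decays exponentially, so a measured present-to-initial abundance ratio fixes the elapsed time.

    ```latex
    % Standard radioactive-decay chronometry (textbook relations, for context):
    % the half-life t_{1/2} sets the decay constant, and the abundance ratio
    % N_0 / N(t) then yields the elapsed time t.
    \[
      N(t) = N_0\, e^{-\lambda t}, \qquad
      \lambda = \frac{\ln 2}{t_{1/2}}, \qquad
      t = \frac{t_{1/2}}{\ln 2}\,\ln\!\frac{N_0}{N(t)} .
    \]
    ```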

  4. Underground Study of Big Bang Nucleosynthesis in the Precision Era of Cosmology

    NASA Astrophysics Data System (ADS)

    Gustavino, Carlo

    2017-03-01

    Big Bang Nucleosynthesis (BBN) theory provides definite predictions for the abundance of light elements produced in the early universe, provided that the knowledge of the relevant nuclear processes of the BBN chain is accurate. At BBN energies (30 ≲ Ecm ≲ 300 keV) the cross section of many BBN processes is very low because of the Coulomb repulsion between the interacting nuclei. For this reason it is convenient to perform the measurements deep underground. Presently the world's only facility operating underground is LUNA (Laboratory for Underground Nuclear Astrophysics) at LNGS ("Laboratorio Nazionale del Gran Sasso", Italy). In this presentation the BBN measurements of LUNA are briefly reviewed and discussed. It will be shown that the ongoing study of the D(p, γ)³He reaction is of primary importance for deriving the baryon density of the universe Ωb with high accuracy. Moreover, this study makes it possible to constrain the existence of the so-called "dark radiation", composed of undiscovered relativistic species permeating the universe, such as sterile neutrinos.

  5. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and energy-intensive. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
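
    A toy version of the genetic-algorithm idea, in Python, evolves job orderings against a simple cost function. The chromosome encoding, fitness, and parameters below are invented for illustration; the paper's model relies on a cluster-performance estimation module rather than fixed runtimes.

    ```python
    # Minimal sketch of genetic-algorithm job ordering (illustrative only).
    import random

    random.seed(0)
    jobs = {"j1": 5.0, "j2": 2.0, "j3": 8.0, "j4": 3.0}  # assumed runtimes

    def fitness(order):
        # Toy cost: total weighted completion time; a real model would query
        # the performance-estimation module described in the abstract.
        t, total = 0.0, 0.0
        for j in order:
            t += jobs[j]
            total += t
        return total  # lower is better

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        head = a[:cut]
        return head + [j for j in b if j not in head]  # keep a permutation

    def mutate(order):
        i, k = random.sample(range(len(order)), 2)
        order[i], order[k] = order[k], order[i]

    def evolve(generations=200, pop_size=20):
        pop = [random.sample(list(jobs), len(jobs)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[: pop_size // 2]          # elitist selection
            children = [crossover(random.choice(survivors),
                                  random.choice(survivors))
                        for _ in range(pop_size - len(survivors))]
            for child in children:
                if random.random() < 0.2:
                    mutate(child)
            pop = survivors + children
        return min(pop, key=fitness)

    print(evolve())  # converges to shortest-first: ['j2', 'j4', 'j1', 'j3']
    ```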

  6. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has engendered big data issues that create both opportunities and challenges. The heterogeneity and sheer volume of the data, together with the big differences between the physical and virtual worlds, have made it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM (the extreme learning machine) for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203
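
    For readers unfamiliar with ELM: its defining trick is that hidden-layer weights are random and fixed, so training reduces to a single linear least-squares solve. A minimal NumPy sketch on synthetic data (not the paper's urban datasets) follows.

    ```python
    # Minimal extreme learning machine (ELM) sketch: hidden-layer weights are
    # random and fixed; only the output weights are solved for. Toy data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                 # 200 samples, 5 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    n_hidden = 50
    W = rng.normal(size=(5, n_hidden))            # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)

    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights

    pred = (H @ beta > 0.5).astype(float)
    print("training accuracy:", (pred == y).mean())
    ```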

  7. Big Data in Psychology: Introduction to Special Issue

    PubMed Central

    Harlow, Lisa L.; Oswald, Frederick L.

    2016-01-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: 1. The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. 2. Availability of large datasets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. Identifying, addressing, and being sensitive to ethical considerations when analyzing large datasets gained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. PMID:27918177
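
    Theme 4, hold-out validation, is easy to make concrete. The sketch below, on synthetic data and assuming scikit-learn as the tooling, fits a model on one split and reports performance only on data the model never saw.

    ```python
    # Sketch of hold-out validation (theme 4): fit on one split, score on a
    # separate hold-out sample the model never saw. Synthetic placeholder data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))               # placeholder predictors
    y = (X[:, 0] - X[:, 3] + rng.normal(size=1000) > 0).astype(int)

    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.3, random_state=0)      # reserve a hold-out sample

    model = LogisticRegression().fit(X_train, y_train)
    print("hold-out accuracy:", model.score(X_hold, y_hold))
    ```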

  8. Beyond simple charts: Design of visualizations for big health data

    PubMed Central

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416

  9. Beyond simple charts: Design of visualizations for big health data.

    PubMed

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.

  10. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the Pan

  11. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We

  12. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the Pan

  13. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE PAGES

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; ...

    2017-03-03

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.
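
    For orientation (standard kinetic theory, not a formula from the paper): a lepton asymmetry is conventionally parametrized by a neutrino degeneracy parameter ξ = μ/T, which skews the equilibrium spectra that the transport calculation then distorts further.

    ```latex
    % Equilibrium neutrino occupation with degeneracy parameter \xi = \mu/T;
    % \xi \neq 0 encodes a lepton asymmetry (standard form, for context):
    \[
      f_\nu(E) = \frac{1}{e^{E/T - \xi} + 1}, \qquad
      f_{\bar\nu}(E) = \frac{1}{e^{E/T + \xi} + 1}.
    \]
    ```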

  14. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  15. Big bang nucleosynthesis revisited via Trojan Horse method measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pizzone, R. G.; Spartá, R.; Spitaleri, C.

    Nuclear reaction rates are among the most important input for understanding primordial nucleosynthesis and, therefore, for a quantitative description of the early universe. An up-to-date compilation of direct cross sections of the ²H(d, p)³H, ²H(d, n)³He, ⁷Li(p, α)⁴He, and ³He(d, p)⁴He reactions is given. These are among the most uncertain cross sections used as input for big bang nucleosynthesis calculations. Their measurements through the Trojan Horse method are also reviewed and compared with direct data. The reaction rates and the corresponding recommended errors in this work were used as input for primordial nucleosynthesis calculations to evaluate their impact on the ²H, ³,⁴He, and ⁷Li primordial abundances, which are then compared with observations.
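
    The role such cross sections play in BBN codes is captured by the standard thermonuclear reaction-rate integral (textbook form, included here for context), in which the measured σ(E) is folded with a Maxwell-Boltzmann flux:

    ```latex
    % Standard thermonuclear reaction-rate integral: BBN codes fold the
    % measured cross section \sigma(E) with a Maxwell-Boltzmann distribution;
    % \mu is the reduced mass of the interacting nuclei.
    \[
      \langle \sigma v \rangle =
      \sqrt{\frac{8}{\pi \mu}}\,(k_B T)^{-3/2}
      \int_0^\infty \sigma(E)\, E\, e^{-E/k_B T}\, dE .
    \]
    ```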

  16. BIG: a large-scale data integration tool for renal physiology.

    PubMed

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.

  17. BigData as a Driver for Capacity Building in Astrophysics

    NASA Astrophysics Data System (ADS)

    Shastri, Prajval

    2015-08-01

    Exciting public interest in astrophysics acquires new significance in the era of Big Data. Since Big Data involves advanced technologies of both software and hardware, astrophysics with Big Data has the potential to inspire young minds with diverse inclinations - i.e., not just those attracted to physics but also those pursuing engineering careers. Digital technologies have become steadily cheaper, which can enable expansion of the Big Data user pool considerably, especially to communities that may not yet be in the astrophysics mainstream, but have high potential because of access to these technologies. For success, however, capacity building at the early stages becomes key. The development of on-line pedagogical resources in astrophysics, astrostatistics, data-mining and data visualisation that are designed around the big facilities of the future can be an important effort that drives such capacity building, especially if facilitated by the IAU.

  18. The dominance of big pharma: power.

    PubMed

    Edgar, Andrew

    2013-05-01

    The purpose of this paper is to provide a normative model for the assessment of the exercise of power by Big Pharma. By drawing on the work of Steven Lukes, it will be argued that while Big Pharma is overtly highly regulated, so that its power is indeed restricted in the interests of patients and the general public, the industry is still able to exercise what Lukes describes as a third dimension of power. This entails concealing the conflicts of interest and grievances that Big Pharma may have with the health care system, physicians and patients, crucially through rhetorical engagements with Patient Advocacy Groups that seek to shape public opinion, and also by marginalising certain groups, excluding them from debates over health care resource allocation. Three issues will be examined: the construction of a conception of the patient as expert patient or consumer; the phenomenon of disease mongering; the suppression or distortion of debates over resource allocation.

  19. Big data for space situation awareness

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Pugh, Mark; Sheaff, Carolyn; Raquepas, Joe; Rocci, Peter

    2017-05-01

    Recent advances in big data (BD) have focused research on the volume, velocity, veracity, and variety of data. These developments enable new opportunities in information management, visualization, machine learning, and information fusion that have potential implications for space situational awareness (SSA). In this paper, we explore some of these BD trends as applicable to SSA towards enhancing the space operating picture. These BD developments could increase measures of performance and measures of effectiveness for future management of the space environment. The global SSA influences include resident space object (RSO) tracking and characterization, cyber protection, remote sensing, and information management. The local satellite awareness can benefit from space weather, health monitoring, and spectrum management for situation space understanding. One area in big data of importance to SSA is value: getting the correct data/information at the right time, which corresponds to SSA visualization for the operator. A SSA big data example is presented supporting disaster relief for space situation awareness, assessment, and understanding.

  20. Big Data Analytics for Genomic Medicine

    PubMed Central

    He, Karen Y.; Ge, Dongliang; He, Max M.

    2017-01-01

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights by examining diverse large-scale data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287

  1. Big Sky and Greenhorn Elemental Comparison

    NASA Image and Video Library

    2015-12-17

    NASA's Curiosity Mars rover examined both the "Greenhorn" and "Big Sky" targets with the rover's Alpha Particle X-ray Spectrometer (APXS) instrument. Greenhorn is located within an altered fracture zone and has an elevated concentration of silica (about 60 percent by weight). Big Sky is the unaltered counterpart for comparison. The bar plot on the left shows scaled concentrations as analyzed by Curiosity's APXS. The bar plot on the right shows what the Big Sky composition would look like if silica (SiO2) and calcium sulfate (both abundant in Greenhorn) were added. The similarity in the resulting composition suggests that much of the chemistry of Greenhorn could be explained by the addition of silica. Ongoing research aims to distinguish between that possible explanation for silicon enrichment and an alternative of silicon being left behind when some other elements were removed by acid weathering. http://photojournal.jpl.nasa.gov/catalog/PIA20275

  2. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  3. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology Criteria for Adverse Events (CTCAE), and the laboratory portion of the Logical Observation Identifiers Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  4. [Big data approaches in psychiatry: examples in depression research].

    PubMed

    Bzdok, D; Karrer, T M; Habel, U; Schneider, F

    2017-11-29

    The exploration and treatment of depression are complicated by heterogeneous etiological mechanisms and various comorbidities. With the growing trend towards big data in psychiatry, research and therapy can increasingly target the individual patient. This novel objective requires special methods of analysis. The possibilities and challenges of the application of big data approaches in depression are examined in closer detail. Examples are given to illustrate the possibilities of big data approaches in depression research. Modern machine learning methods are compared to traditional statistical methods in terms of their potential in applications to depression. Big data approaches are particularly suited to the analysis of detailed observational data, the prediction of single data points or several clinical variables and the identification of endophenotypes. A current challenge lies in the transfer of results into the clinical treatment of patients with depression. Big data approaches enable biological subtypes in depression to be identified and predictions in individual patients to be made. They have enormous potential for prevention, early diagnosis, treatment choice and prognosis of depression as well as for treatment development.
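
    As one concrete instance of the subtype-identification idea (illustrative only; the article surveys methods rather than prescribing this one), unsupervised clustering can propose candidate biological subtypes from patient features:

    ```python
    # Illustrative subtype discovery: cluster patients on symptom/biomarker
    # features. Synthetic data; real studies need validation and clinical review.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two synthetic "subtypes" with shifted feature means.
    features = np.vstack([rng.normal(0.0, 1.0, size=(100, 6)),
                          rng.normal(1.5, 1.0, size=(100, 6))])

    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=0).fit_predict(features)
    print("patients per candidate subtype:", np.bincount(labels))
    ```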

  5. A practical guide to big data research in psychology.

    PubMed

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
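
    In the spirit of the article's walkthrough tutorials, the following toy sketch chains the two named methods: LDA topic modeling of a small corpus, then SVM classification on the topic mixtures. The corpus, labels, and parameters are invented, and scikit-learn is assumed as the tooling.

    ```python
    # Toy end-to-end sketch: LDA topic modeling, then SVM classification on
    # the inferred topic mixtures. Corpus and labels are invented examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import SVC

    docs = ["the game was close last night", "great match and a late goal",
            "the election results are in", "voters went to the polls today"]
    labels = [0, 0, 1, 1]  # 0 = sports, 1 = politics

    counts = CountVectorizer().fit_transform(docs)       # word counts
    topics = LatentDirichletAllocation(n_components=2,
                                       random_state=0).fit_transform(counts)

    clf = SVC(kernel="linear").fit(topics, labels)
    print(clf.predict(topics))  # classify documents by topic mixture
    ```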

  6. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    PubMed

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

    The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes to take advantage of newer cutting-edge analytics that support methods to control costs and improve patient quality and safety.

  7. From big data to deep insight in developmental science.

    PubMed

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  8. Translating Big Data into Smart Data for Veterinary Epidemiology

    PubMed Central

    VanderWaal, Kimberly; Morrison, Robert B.; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M.

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having “big data” to create “smart data,” with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues. PMID:28770216

  9. Frontiers of Big Bang cosmology and primordial nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Cheoun, Myung-Ki; Kajino, Toshitaka; Kusakabe, Motohiko; Yamazaki, Dai G.

    2012-11-01

    We summarize some current research on the formation and evolution of the universe and overview some of the key questions surrounding the big bang. There are really only two observational cosmological probes of the physics of the early universe. Of those two, the only probe during the relevant radiation-dominated epoch is the yield of light elements during the epoch of big bang nucleosynthesis. The synthesis of light elements occurs in the temperature regime from 10⁸ to 10¹⁰ K and times of about 1 to 10⁴ s into the big bang. The other probe is the spectrum of temperature fluctuations in the CMB which (among other things) contains information on the first quantum fluctuations in the universe, along with details of the distribution and evolution of dark matter, baryonic matter and photons up to the surface of photon last scattering. Here, we emphasize the role of these probes in answering some key questions of the big bang and early universe cosmology.
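
    The quoted regime follows from the standard time-temperature relation of the radiation-dominated early universe (a textbook result, added for context): since t scales as T⁻², temperatures of 10¹⁰ down to 10⁸ K map onto roughly 1 to 10⁴ s.

    ```latex
    % Radiation-dominated era: t \propto T^{-2}. Numerically,
    %   t/\mathrm{s} \approx (T / 1\,\mathrm{MeV})^{-2},
    % and 1 MeV corresponds to about 1.16 \times 10^{10} K, so
    % T ~ 10^{10}..10^{8} K maps onto t ~ 1..10^{4} s.
    \[
      t \;\approx\; \left(\frac{T}{1\,\mathrm{MeV}}\right)^{-2}\,\mathrm{s}.
    \]
    ```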

  10. Autoantibodies to two novel peptides in seronegative and early rheumatoid arthritis.

    PubMed

    De Winter, Liesbeth M; Hansen, Wendy L J; van Steenbergen, Hanna W; Geusens, Piet; Lenaerts, Jan; Somers, Klaartje; Stinissen, Piet; van der Helm-van Mil, Annette H M; Somers, Veerle

    2016-08-01

    Despite recent progress in biomarker discovery for RA diagnostics, still over one-third of RA patients, and even more in early disease, present without RF or ACPA. The aim of this study was to confirm the presence of previously identified autoantibodies to novel Hasselt University (UH) peptides in early and seronegative RA. Screening for antibodies against novel UH peptides UH-RA.1, UH-RA.9, UH-RA.14 and UH-RA.21 was performed in two large independent cohorts. Peptide ELISAs were developed to screen for the presence of antibodies to UH-RA peptides. First, 292 RA patients (including 39 early patients), 90 rheumatic and 97 healthy controls from UH were studied. Antibody reactivity to two peptides (UH-RA.1 and UH-RA.21) was also evaluated in 600 RA patients, 309 patients with undifferentiated arthritis and 157 rheumatic controls from the Leiden Early Arthritis Clinic cohort. In both cohorts, 38% of RA patients were seronegative for RF and ACPA. Testing for autoantibodies to UH-RA.1 and UH-RA.21 reduced the serological gap from 38% to 29% in the UH cohort (P = 0.03) and from 38% to 32% in the Leiden Early Arthritis Clinic cohort (P = 0.01). Furthermore, 19-33% of early RA patients carried antibodies to these peptides. Specificities in rheumatic controls ranged from 82 to 96%. Whereas antibodies against UH-RA.1 were related to remission, anti-UH-RA.21 antibodies were associated with inflammation, joint erosion and higher tender and swollen joint counts. This study validates the presence of antibody reactivity to novel UH-RA peptides in seronegative and early RA. This might reinforce current diagnostics and improve early diagnosis and intervention in RA. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. The BIG Score and Prediction of Mortality in Pediatric Blunt Trauma.

    PubMed

    Davis, Adrienne L; Wales, Paul W; Malik, Tahira; Stephens, Derek; Razik, Fathima; Schuh, Suzanne

    2015-09-01

    To examine the association between in-hospital mortality and the BIG (composed of the base deficit [B], International normalized ratio [I], Glasgow Coma Scale [G]) score measured on arrival to the emergency department in pediatric blunt trauma patients, adjusted for pre-hospital intubation, volume administration, and presence of hypotension and head injury. We also examined the association between the BIG score and mortality in patients requiring admission to the intensive care unit (ICU). A retrospective 2001-2012 trauma database review of patients with blunt trauma ≤ 17 years old with an Injury Severity score ≥ 12. Charts were reviewed for in-hospital mortality, components of the BIG score upon arrival to the emergency department, prehospital intubation, crystalloids ≥ 20 mL/kg, presence of hypotension, head injury, and disposition. 50/621 (8%) of the study patients died. Independent mortality predictors were the BIG score (OR 11, 95% CI 6-25), prior fluid bolus (OR 3, 95% CI 1.3-9), and prior intubation (OR 8, 95% CI 2-40). The area under the receiver operating characteristic curve was 0.95 (CI 0.93-0.98), with the optimal BIG cutoff of 16. With BIG <16, death rate was 3/496 (0.006, 95% CI 0.001-0.007) vs 47/125 (0.38, 95% CI 0.15-0.7) with BIG ≥ 16, (P < .0001). In patients requiring admission to the ICU, the BIG score remained predictive of mortality (OR 14.3, 95% CI 7.3-32, P < .0001). The BIG score accurately predicts mortality in a population of North American pediatric patients with blunt trauma independent of pre-hospital interventions, presence of head injury, and hypotension, and identifies children with a high probability of survival (BIG <16). The BIG score is also associated with mortality in pediatric patients with trauma requiring admission to the ICU. Copyright © 2015 Elsevier Inc. All rights reserved.
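
    The abstract does not restate the score's formula; the commonly published form from its original derivation is base deficit + 2.5 × INR + (15 − GCS), which the sketch below assumes. Treat the exact form as an assumption to verify against the primary source before any use.

    ```python
    # BIG score sketch; formula as commonly published (base deficit + 2.5*INR
    # + (15 - GCS)) -- verify against the primary derivation before any use.
    def big_score(base_deficit: float, inr: float, gcs: int) -> float:
        if not 3 <= gcs <= 15:
            raise ValueError("GCS must be between 3 and 15")
        return base_deficit + 2.5 * inr + (15 - gcs)

    score = big_score(base_deficit=8.0, inr=1.4, gcs=7)
    # Cutoff of 16 taken from the study's optimal threshold.
    print(score, "-> high risk" if score >= 16 else "-> lower risk")
    ```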

  12. Big-City Rules

    ERIC Educational Resources Information Center

    Gordon, Dan

    2011-01-01

    When it comes to implementing innovative classroom technology programs, urban school districts face significant challenges stemming from their big-city status. These range from large bureaucracies, to scalability, to how to meet the needs of a more diverse group of students. Because of their size, urban districts tend to have greater distance…

  13. Big(ger) Data as Better Data in Open Distance Learning

    ERIC Educational Resources Information Center

    Prinsloo, Paul; Archer, Elizabeth; Barnes, Glen; Chetty, Yuraisha; van Zyl, Dion

    2015-01-01

    In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously…

  14. Big Data Analysis Framework for Healthcare and Social Sectors in Korea

    PubMed Central

    Song, Tae-Min

    2015-01-01

    Objectives We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. Methods We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Results Developed countries (e.g., the United States, the UK, Singapore, Australia, and even the OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on the ICT-based policy of the current government and the strategic goals of the Ministry of Health and Welfare. We suggest frameworks of big data analysis for the healthcare and welfare service sectors separately and assign them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. Conclusions There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached. PMID:25705552

  15. Big data analysis framework for healthcare and social sectors in Korea.

    PubMed

    Song, Tae-Min; Ryu, Seewon

    2015-01-01

    We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Developed countries (e.g., the United States, the UK, Singapore, Australia, and even the OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on the ICT-based policy of the current government and the strategic goals of the Ministry of Health and Welfare. We suggest frameworks of big data analysis for the healthcare and welfare service sectors separately and assign them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached.

  16. Female "Big Fish" Swimming against the Tide: The "Big-Fish-Little-Pond Effect" and Gender-Ratio in Special Gifted Classes

    ERIC Educational Resources Information Center

    Preckel, Franzis; Zeidner, Moshe; Goetz, Thomas; Schleyer, Esther Jane

    2008-01-01

    This study takes a second look at the "big-fish-little-pond effect" (BFLPE) on a national sample of 769 gifted Israeli students (32% female) previously investigated by Zeidner and Schleyer (Zeidner, M., & Schleyer, E. J., (1999a). "The big-fish-little-pond effect for academic self-concept, test anxiety, and school grades in…

  17. Privacy Challenges of Genomic Big Data.

    PubMed

    Shen, Hong; Ma, Jian

    2017-01-01

    With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently with low cost. However, such massive amount of personal genomic data creates tremendous challenge for privacy, especially given the emergence of direct-to-consumer (DTC) industry that provides genetic testing services. Here we review the recent development in genomic big data and its implications on privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  18. Big two personality and big three mate preferences: similarity attracts, but country-level mate preferences crucially matter.

    PubMed

    Gebauer, Jochen E; Leary, Mark R; Neberich, Wiebke

    2012-12-01

    People differ regarding their "Big Three" mate preferences of attractiveness, status, and interpersonal warmth. We explain these differences by linking them to the "Big Two" personality dimensions of agency/competence and communion/warmth. The similarity-attracts hypothesis predicts that people high in agency prefer attractiveness and status in mates, whereas those high in communion prefer warmth. However, these effects may be moderated by agentics' tendency to contrast from ambient culture, and communals' tendency to assimilate to ambient culture. Attending to such agentic-cultural-contrast and communal-cultural-assimilation crucially qualifies the similarity-attracts hypothesis. Data from 187,957 online-daters across 11 countries supported this model for each of the Big Three. For example, agentics-more so than communals-preferred attractiveness, but this similarity-attracts effect virtually vanished in attractiveness-valuing countries. This research may reconcile inconsistencies in the literature while utilizing nonhypothetical and consequential mate preference reports that, for the first time, were directly linked to mate choice.

  19. Development and Operation of an Automatic Rotor Trim Control System for use During the UH-60 Individual Blade Control Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.

    2010-01-01

    A full-scale wind tunnel test to evaluate the effects of Individual Blade Control (IBC) on the performance, vibration, noise and loads of a UH-60A rotor was recently completed in the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel [1]. A key component of this wind tunnel test was an automatic rotor trim control system that allowed the rotor trim state to be set more precisely, quickly and repeatably than was possible with the rotor operator setting the trim condition manually. The trim control system was also able to maintain the desired trim condition through changes in IBC actuation both in open- and closed-loop IBC modes, and through long-period transients in wind tunnel flow. This ability of the trim control system to automatically set and maintain a steady rotor trim enabled the effects of different IBC inputs to be compared at common trim conditions and to perform these tests quickly without requiring the rotor operator to re-trim the rotor. The trim control system described in this paper was developed specifically for use during the IBC wind tunnel test
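
    The control structure of such a trim loop can be caricatured as integral feedback driving a measured rotor quantity to its target. Everything in the sketch below, including the plant model, gain, and units, is invented for illustration and is not the NFAC system's implementation.

    ```python
    # Toy integral-feedback trim loop: drive measured rotor thrust to a target
    # by adjusting collective pitch. Plant and gain are invented placeholders.
    def plant_thrust(collective_deg: float) -> float:
        # Fictitious linear rotor response (lb of thrust per degree collective).
        return 2000.0 * collective_deg

    def trim(target_lb: float, gain: float = 2e-4, steps: int = 100) -> float:
        collective = 0.0
        for _ in range(steps):
            error = target_lb - plant_thrust(collective)
            collective += gain * error   # integral action removes steady bias
        return collective

    theta0 = trim(target_lb=16000.0)
    print(f"collective = {theta0:.2f} deg, "
          f"thrust = {plant_thrust(theta0):.0f} lb")
    ```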

  20. Big Biology: Supersizing Science During the Emergence of the 21st Century

    PubMed Central

    Vermeulen, Niki

    2017-01-01

    Is biology the youngest member of the Big Science family? In the wake of the Human Genome Project, increased collaboration in biological research became the subject of heated discussion, but the debates and reflections remained largely polemical and showed limited appreciation of the diversity and explanatory power of the concept of Big Science. At the same time, science and technology studies scholars have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of the notion of Big Science with diverse data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced, and analytically useful understanding of Big Biology and to leave behind the normative debates with their simple dichotomies and rhetorical positions. While the concept of Big Science can be seen as a fashion in science policy, by now perhaps even an old-fashioned concept, my novel argument is that its analytical use directs our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, notably in its patterns of research organization, the technologies used, and the societal contexts in which it operates. In this way, reflections on Big Science, Big Biology, and their relations to knowledge production can place recent claims about fundamental changes in life science research in a historical context. PMID:27215209

  1. Preliminary survey of the mayflies (Ephemeroptera) and caddisflies (Trichoptera) of Big Bend Ranch State Park and Big Bend National Park

    PubMed Central

    Baumgardner, David E.; Bowles, David E.

    2005-01-01

    The mayfly (Insecta: Ephemeroptera) and caddisfly (Insecta: Trichoptera) fauna of Big Bend National Park and Big Bend Ranch State Park are reported based upon numerous records. For mayflies, sixteen species representing four families and twelve genera are reported. By comparison, thirty-five species of caddisflies were collected during this study representing seventeen genera and nine families. Although the Rio Grande supports the greatest diversity of mayflies (n=9) and caddisflies (n=14), numerous spring-fed creeks throughout the park also support a wide variety of species. A general lack of data on the distribution and abundance of invertebrates in Big Bend National and State Park is discussed, along with the importance of continuing this type of research. PMID:17119610

  2. Some nuclear physics aspects of BBN

    NASA Astrophysics Data System (ADS)

    Coc, Alain

    2017-09-01

    Primordial or big bang nucleosynthesis (BBN) is now a parameter-free theory whose predictions are in good overall agreement with observations. However, the calculated 7Li abundance is significantly higher than the one deduced from spectroscopic observations. Nuclear physics solutions to this lithium problem have been investigated by experimental means. Other proposed solutions involve exotic sources of extra neutrons, which inevitably lead to an increase of the deuterium abundance; this now seems excluded by recent deuterium observations.

  3. BIG: a large-scale data integration tool for renal physiology

    PubMed Central

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya

    2016-01-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488

  4. AmeriFlux US-Rms RCEW Mountain Big Sagebrush

    DOE Data Explorer

    Flerchinger, Gerald [USDA Agricultural Research Service]

    2017-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-Rms RCEW Mountain Big Sagebrush. Site Description - The site is located on the USDA-ARS's Reynolds Creek Experimental Watershed. It is dominated by mountain big sagebrush on land managed by USDI Bureau of Land Management.

  5. Vertebrate richness and biogeography in the Big Thicket of Texas

    Treesearch

    Michael H MacRoberts; Barbara R. MacRoberts; D. Craig Rudolph

    2010-01-01

    The Big Thicket of Texas has been described as rich in species and a "crossroads": a place where organisms from many different regions meet. We examine the species richness and regional affiliations of Big Thicket vertebrates. We found that the Big Thicket is neither exceptionally rich in vertebrates nor is it a crossroads for vertebrates. Its vertebrate fauna is...

  6. Creating value in health care through big data: opportunities and policy implications.

    PubMed

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, make big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current policies to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised.

  7. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data—lots of data—generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society and our nation’s security. Yet the volume, velocity, variety and veracity of data is simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses and more. This scientific capability is foundational to PNNL’s energy, environment and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change and scientific exploration.

  8. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
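
    The standard BB-BC loop alternates a random "big bang" scatter with a "big crunch" contraction toward a fitness-weighted center of mass. The Python sketch below illustrates that basic loop for a generic continuous minimization problem; it is a minimal illustration only, not the article's refined discrete formulation, which additionally maps design variables to AISC-ASD section lists and handles stress constraints.

        import numpy as np

        def bb_bc(f, lower, upper, pop_size=50, iters=200, alpha=1.0, seed=0):
            """Minimize f over box bounds with a standard Big Bang-Big Crunch loop."""
            rng = np.random.default_rng(seed)
            dim = lower.size
            pop = rng.uniform(lower, upper, size=(pop_size, dim))  # initial big bang
            for k in range(1, iters + 1):
                fit = np.apply_along_axis(f, 1, pop)
                # Big crunch: contract to a fitness-weighted center of mass
                # (lower objective value means a heavier weight).
                w = 1.0 / (fit - fit.min() + 1e-12)
                center = (w[:, None] * pop).sum(axis=0) / w.sum()
                # Big bang: scatter new candidates around the center with a
                # search radius that shrinks as 1/k.
                sigma = alpha * (upper - lower) / k
                pop = np.clip(center + sigma * rng.standard_normal((pop_size, dim)),
                              lower, upper)
            fit = np.apply_along_axis(f, 1, pop)
            return pop[fit.argmin()], fit.min()

        # Toy usage: a 5-variable sphere function with its optimum at x = 3.
        best_x, best_f = bb_bc(lambda x: ((x - 3.0) ** 2).sum(),
                               np.zeros(5), np.full(5, 10.0))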

  9. How quantum is the big bang?

    PubMed

    Bojowald, Martin

    2008-06-06

    When quantum gravity is used to discuss the big bang singularity, the most important, though rarely addressed, question is what role genuine quantum degrees of freedom play. Here, complete effective equations are derived for isotropic models with an interacting scalar to all orders in the expansions involved. The resulting coupling terms show that quantum fluctuations do not affect the bounce much. Quantum correlations, however, do have an important role and could even eliminate the bounce. How quantum gravity regularizes the big bang depends crucially on properties of the quantum state.

  10. 76 FR 47141 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ....us , with the words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA.

  11. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review was conducted of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state-of-the-science data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty, and the PhD programs that prepare future scientists, are ready for big data and data science.

  12. Priming the Pump for Big Data at Sentara Healthcare.

    PubMed

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

    Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office, and outside the electronic health record (EHR), has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan separate from but aligned with corporate system goals to ensure optimal investment and management of this essential asset.

  13. Exascale computing and big data

    DOE PAGES

    Reed, Daniel A.; Dongarra, Jack

    2015-06-25

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  14. Exascale computing and big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, Daniel A.; Dongarra, Jack

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  15. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes, and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-grams) and Dynamic algorithms are compared, and both algorithms show similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
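
    The Dice (di-gram) matching mentioned above reduces to comparing the character-bigram sets of two names and scoring their overlap. The sketch below is a minimal, hypothetical illustration of that idea; the tokenization and the example names are assumptions, not details taken from the caBIG tooling.

        def bigrams(name):
            """Character bigrams of a lower-cased name."""
            s = name.lower()
            return {s[i:i + 2] for i in range(len(s) - 1)}

        def dice(a, b):
            """Dice coefficient of the bigram sets of two strings, in [0, 1]."""
            ba, bb = bigrams(a), bigrams(b)
            if not ba or not bb:
                return 0.0
            return 2 * len(ba & bb) / (len(ba) + len(bb))

        # Match a UML class-attribute against candidate CDE object-property
        # pairs and keep the highest-scoring one (names are hypothetical).
        candidates = ["Patient Identifier", "Specimen Type", "Patient ID Number"]
        best = max(candidates, key=lambda c: dice("patientId", c))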

  16. Extending Big-Five Theory into Childhood: A Preliminary Investigation into the Relationship between Big-Five Personality Traits and Behavior Problems in Children.

    ERIC Educational Resources Information Center

    Ehrler, David J.; McGhee, Ron L.; Evans, J. Gary

    1999-01-01

    Investigation conducted to link Big-Five personality traits with behavior problems identified in childhood. Results show distinct patterns of behavior problems associated with various personality characteristics. Preliminary data indicate that identifying Big-Five personality trait patterns may be a useful dimension of assessment for understanding…

  17. Will Big Data Mean the End of Privacy?

    ERIC Educational Resources Information Center

    Pence, Harry E.

    2015-01-01

    Big Data is currently a hot topic in the field of technology, and many campuses are considering the addition of this topic into their undergraduate courses. Big Data tools are not just playing an increasingly important role in many commercial enterprises; they are also combining with new digital devices to dramatically change privacy. This article…

  18. Big Earth Data Initiative: Metadata Improvement: Case Studies

    NASA Technical Reports Server (NTRS)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of the U.S. Government's civil Earth observation data to improve discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all of these goals.

  19. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  20. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  1. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  2. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  3. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  4. Nuclear constraints on the age of the universe

    NASA Technical Reports Server (NTRS)

    Schramm, D. N.

    1983-01-01

    A review is made of how one can use nuclear physics to put rather stringent limits on the age of the universe and thus the cosmic distance scale. The age can be estimated to a fair degree of accuracy. No single measurement of the time since the Big Bang gives a specific, unambiguous age. There are several methods that together fix the age with surprising precision. In particular, there are three totally independent techniques for estimating an age and a fourth technique which involves finding consistency of the other three in the framework of the standard Big Bang cosmological model. The three independent methods are: cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of the three methods, and the consistency technique. Previously announced in STAR as N83-34868
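
    Radioactive dating rests on a simple decay relation; the fragment below states the standard chronometer-pair form as an illustration of the method (generic nucleocosmochronology, not the paper's specific numbers):

        % Age from a chronometer pair A, B with decay constants
        % \lambda_A, \lambda_B and production ratio (N_A/N_B)_0:
        \[
          \frac{N_A(t)}{N_B(t)}
            = \left(\frac{N_A}{N_B}\right)_{0} e^{-(\lambda_A - \lambda_B)\,t}
          \quad\Longrightarrow\quad
          t = \frac{1}{\lambda_A - \lambda_B}\,
              \ln\!\left[\left(\frac{N_A}{N_B}\right)_{0}
              \bigg/ \frac{N_A(t)}{N_B(t)}\right].
        \]
        % For example, the 235U/238U pair (half-lives of 0.704 and 4.47 Gyr)
        % bounds the mean age of the r-process elements and hence yields a
        % lower limit on the age of the Galaxy.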

  5. The little sibling of the big rip singularity

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Martín-Moruno, Prado; Ouali, Taoufik; Tavakoli, Yaser

    2015-07-01

    In this paper, we present a new cosmological event, which we have named the little sibling of the big rip. This event is much smoother than the big rip singularity. When the little sibling of the big rip is reached, the Hubble rate and the scale factor blow up, but the cosmic derivative of the Hubble rate does not. This abrupt event takes place at an infinite cosmic time, where the scalar curvature explodes. We show that a doomsday à la little sibling of the big rip is compatible with an accelerating universe; indeed, at present it would perfectly mimic a ΛCDM scenario. It turns out that, even though the event seems harmless because it takes place in the infinite future, the bound structures in the universe would be unavoidably destroyed within a finite cosmic time from now. The model can be motivated by considering that the weak energy condition should not be strongly violated in our universe, and it could give us some hints about the status of recently formulated nonlinear energy conditions.
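
    One simple realization consistent with this description (an illustrative assumption, not the paper's full model) is a cosmology in which the cosmic derivative of the Hubble rate tends to a positive constant:

        \[
          \dot H = C > 0
          \;\Longrightarrow\;
          H(t) = H_0 + C\,t, \qquad
          a(t) = a_0\, e^{H_0 t + \frac{1}{2} C t^2},
        \]
        % so that as t grows both H and a blow up while \dot H stays finite,
        % and the scalar curvature R = 6(\dot H + 2H^2) diverges only at
        % infinite cosmic time, matching the behavior described above.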

  6. Differential Privacy Preserving in Big Data Analytics for Connected Health.

    PubMed

    Lin, Chi; Song, Zihao; Song, Houbing; Zhou, Yanhong; Wang, Yi; Wu, Guowei

    2016-04-01

    In Body Area Networks (BANs), big data collected by wearable sensors usually contain sensitive information, which must be appropriately protected. Previous methods neglected the privacy protection issue, leading to privacy exposure. In this paper, a differential privacy protection scheme for big data in body sensor networks is developed. Compared with previous methods, this scheme provides privacy protection with higher availability and reliability. We introduce the concept of dynamic noise thresholds, which makes our scheme more suitable for processing big data. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme can still provide enough interference to big sensitive data so as to preserve privacy.
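
    The workhorse behind such schemes is calibrated noise addition. The sketch below shows the basic Laplace mechanism applied to an aggregate sensor query; the stand-in data, the epsilon value, and the sensitivity calculation are illustrative assumptions, and the paper's dynamic noise thresholds are not reproduced here.

        import numpy as np

        def laplace_release(true_value, sensitivity, epsilon, rng):
            """Release a query answer with Laplace noise of scale sensitivity/epsilon."""
            return true_value + rng.laplace(0.0, sensitivity / epsilon)

        rng = np.random.default_rng(7)
        heart_rates = rng.normal(72, 8, size=10_000)   # stand-in BAN sensor data
        # Sensitivity of the mean: roughly how much one record can move the answer.
        sensitivity = (heart_rates.max() - heart_rates.min()) / heart_rates.size
        noisy_mean = laplace_release(heart_rates.mean(), sensitivity,
                                     epsilon=0.5, rng=rng)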

  7. Breaking BAD: A Data Serving Vision for Big Active Data

    PubMed Central

    Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.

    2017-01-01

    Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377

  8. Using Big (and Critical) Data to Unmask Inequities in Community Colleges

    ERIC Educational Resources Information Center

    Rios-Aguilar, Cecilia

    2014-01-01

    This chapter presents various definitions of big data and examines some of the assumptions regarding the value and power of big data, especially as it relates to issues of equity in community colleges. Finally, this chapter ends with a discussion of the opportunities and challenges of using big data, critically, for institutional researchers.

  9. 50 CFR 86.11 - What does the national BIG Program do?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false What does the national BIG Program do? 86.11 Section 86.11 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE... GRANT (BIG) PROGRAM General Information About the Grant Program § 86.11 What does the national BIG...

  10. Data Confidentiality Challenges in Big Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Jian; Zhao, Dongfang

    In this paper, we address the problem of data confidentiality in big data analytics. In many fields, many useful patterns can be extracted by applying machine learning techniques to big data. However, data confidentiality must be protected. In many scenarios, data confidentiality could well be a prerequisite for data to be shared. We present a scheme to provide provably secure data confidentiality and discuss various techniques to optimize the performance of such a system.

  11. Plants of the Big Cypress National Preserve, Florida

    USGS Publications Warehouse

    Muss, J.D.; Austin, D.F.; Snyder, J.R.

    2003-01-01

    A new survey of the Big Cypress National Preserve shows that the vascular flora consists of 145 families and 851 species. Of these, 72 are listed by the State of Florida as endangered or threatened plants, while many others are on the margins of their ranges. The survey also shows 158 species of exotic plants within the Preserve, some of which imperil native species by competing with them. Finally, we compare the flora of the Big Cypress National Preserve with those of the nearby Fakahatchee Strand State Preserve and the Everglades National Park. Although Big Cypress is less than half the size of Everglades National Park, it has 90% of the native species richness (693 vs. 772).

  12. The Universal R-Matrix for the Jordanian Deformation of sl(2), and the Contracted Forms of so(4)

    NASA Astrophysics Data System (ADS)

    Shariati, A.; Aghamohammadi, A.; Khorrami, M.

    We introduce a universal R-matrix for the Jordanian deformation of U(sl(2)). Using Uh(so(4))=Uh(sl(2)) ⊕ U-h(sl(2)), we obtain the universal R-matrix for Uh(so(4)). Applying the graded contractions on the universal R-matrix of Uh(so(4)), we show that there exist three distinct R-matrices for all the contracted algebras. It is shown that Uh(sl(2)), Uh(so(4)), and all of these contracted algebras are triangular.

  13. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.

  14. Clinical research of traditional Chinese medicine in big data era.

    PubMed

    Zhang, Junhua; Zhang, Boli

    2014-09-01

    With the advent of the big data era, our thinking, technology, and methodology are being transformed. Data-intensive scientific discovery based on big data, named "The Fourth Paradigm," has become a new paradigm of scientific research. Along with the development and application of Internet information technology in the field of healthcare, individual health records, clinical data of diagnosis and treatment, and genomic data have accumulated dramatically, generating big data in the medical field for clinical research and assessment. With the support of big data, the defects and weaknesses of conventional sampling-based clinical evaluation methodology may be overcome. Our research target shifts from "causality inference" to "correlativity analysis." This not only facilitates the evaluation of individualized treatment, disease prediction, prevention and prognosis, but is also suitable for the practice of preventive healthcare and symptom pattern differentiation for treatment in terms of traditional Chinese medicine (TCM), and for the post-marketing evaluation of Chinese patent medicines. To conduct clinical studies involving big data in the TCM domain, top-level design is needed and should be carried out in an orderly manner. The fundamental construction and innovation studies should be strengthened in the areas of data platform creation, data analysis technology, and the fostering and training of big data professionals.

  15. SETI as a part of Big History

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time requested for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model in the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we derive Big History Theory based on GBMs: just as the GBM is the "movie" unfolding in time, so the Statistical Drake Equation is its "still picture", static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50
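
    The GBM at the heart of this model is straightforward to simulate. The sketch below draws one path for the number of living species over 3.5 Gyr; the drift is chosen so the expected value grows from one species (RNA) to an assumed ~5e7 species today, and the volatility is an arbitrary illustrative value rather than a figure from the paper.

        import numpy as np

        def gbm_path(n0, mu, sigma, t_end, steps, seed=42):
            """One path of dN = mu*N dt + sigma*N dW via the exact log-space scheme."""
            rng = np.random.default_rng(seed)
            dt = t_end / steps
            t = np.linspace(0.0, t_end, steps + 1)
            dw = rng.standard_normal(steps) * np.sqrt(dt)
            log_n = np.log(n0) + np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dw)
            return t, np.concatenate(([n0], np.exp(log_n)))

        # Time in Gyr; E[N(t)] = n0*exp(mu*t) reaches about 5e7 at t = 3.5 Gyr.
        t, n_species = gbm_path(n0=1.0, mu=np.log(5e7) / 3.5, sigma=1.0,
                                t_end=3.5, steps=3500)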

  16. Neutrino energy transport in weak decoupling and big bang nucleosynthesis

    DOE PAGES

    Grohs, Evan Bradley; Paris, Mark W.; Kishimoto, Chad T.; ...

    2016-04-21

    In this study, we calculate the evolution of the early universe through the epochs of weak decoupling, weak freeze-out and big bang nucleosynthesis (BBN) by simultaneously coupling a full strong, electromagnetic, and weak nuclear reaction network with a multienergy group Boltzmann neutrino energy transport scheme. The modular structure of our code provides the ability to dissect the relative contributions of each process responsible for evolving the dynamics of the early universe in the absence of neutrino flavor oscillations. Such an approach allows a detailed accounting of the evolution of the νe, ν¯e, νμ, ν¯μ, ντ, ν¯τ energy distribution functions alongside and self-consistently with the nuclear reactions and entropy/heat generation and flow between the neutrino and photon/electron/positron/baryon plasma components. This calculation reveals nonlinear feedback in the time evolution of neutrino distribution functions and plasma thermodynamic conditions (e.g., electron-positron pair densities), with implications for the phasing between scale factor and plasma temperature; the neutron-to-proton ratio; light-element abundance histories; and the cosmological parameter Neff. We find that our approach of following the time development of neutrino spectral distortions and concomitant entropy production and extraction from the plasma results in changes in the computed value of the BBN deuterium yield. For example, for particular implementations of quantum corrections in plasma thermodynamics, our calculations show a 0.4% increase in deuterium. These changes are potentially significant in the context of anticipated improvements in observational and nuclear physics uncertainties.

  17. Neutrino energy transport in weak decoupling and big bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grohs, Evan Bradley; Paris, Mark W.; Kishimoto, Chad T.

    In this study, we calculate the evolution of the early universe through the epochs of weak decoupling, weak freeze-out and big bang nucleosynthesis (BBN) by simultaneously coupling a full strong, electromagnetic, and weak nuclear reaction network with a multienergy group Boltzmann neutrino energy transport scheme. The modular structure of our code provides the ability to dissect the relative contributions of each process responsible for evolving the dynamics of the early universe in the absence of neutrino flavor oscillations. Such an approach allows a detailed accounting of the evolution of the νe, ν¯e, νμ, ν¯μ, ντ, ν¯τ energy distribution functions alongside and self-consistently with the nuclear reactions and entropy/heat generation and flow between the neutrino and photon/electron/positron/baryon plasma components. This calculation reveals nonlinear feedback in the time evolution of neutrino distribution functions and plasma thermodynamic conditions (e.g., electron-positron pair densities), with implications for the phasing between scale factor and plasma temperature; the neutron-to-proton ratio; light-element abundance histories; and the cosmological parameter Neff. We find that our approach of following the time development of neutrino spectral distortions and concomitant entropy production and extraction from the plasma results in changes in the computed value of the BBN deuterium yield. For example, for particular implementations of quantum corrections in plasma thermodynamics, our calculations show a 0.4% increase in deuterium. These changes are potentially significant in the context of anticipated improvements in observational and nuclear physics uncertainties.

  18. Research on Implementing Big Data: Technology, People, & Processes

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant; Johnson, Margie; Dennis, Randall

    2015-01-01

    When many people hear the term "big data", they primarily think of a technology tool for the collection and reporting of data of high variety, volume, and velocity. However, the complexity of big data is not only the technology, but the supporting processes, policies, and people supporting it. This paper was written by three experts to…

  19. The Ethics of Big Data and Nursing Science.

    PubMed

    Milton, Constance L

    2017-10-01

    Big data is a scientific, social, and technological trend referring to the process and size of datasets available for analysis. Ethical implications arise as healthcare disciplines, including nursing, struggle over questions of informed consent, privacy, ownership of data, and its possible use in epistemology. The author offers straight-thinking possibilities for the use of big data in nursing science.

  20. A peek into the future of radiology using big data applications

    PubMed Central

    Kharat, Amit T.; Singhal, Shubham

    2017-01-01

    Big data is the extremely large amount of data that is available in the radiology department. Big data is identified by four Vs – Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs – Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be done for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstructions (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as system memory and server load and provide prompt results. However, a word of caution: big data should not become "dump data" due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized

  1. A peek into the future of radiology using big data applications.

    PubMed

    Kharat, Amit T; Singhal, Shubham

    2017-01-01

    Big data is the extremely large amount of data that is available in the radiology department. Big data is identified by four Vs - Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs - Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be done for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstructions (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as system memory and server load and provide prompt results. However, a word of caution: big data should not become "dump data" due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized and

  2. Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics.

    PubMed

    Peek, N; Holmes, J H; Sun, J

    2014-08-15

    To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.

  3. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    PubMed Central

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639

  4. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    PubMed

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
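
    The paper demonstrates the procedure in R; the Python analogue below is an assumption made for illustration only. It splits a large dataset, estimates a correlation in each split, and pools the per-split estimates with a fixed-effect meta-analysis on Fisher's z.

        import numpy as np

        def analyze(chunk):
            """Per-split estimate: correlation as Fisher's z plus its variance."""
            r = np.corrcoef(chunk[:, 0], chunk[:, 1])[0, 1]
            return np.arctanh(r), 1.0 / (chunk.shape[0] - 3)

        rng = np.random.default_rng(1)
        x = rng.standard_normal((1_000_000, 2))
        x[:, 1] = 0.3 * x[:, 0] + np.sqrt(1 - 0.3**2) * x[:, 1]  # true r = 0.3

        # Split / analyze / meta-analyze:
        z, v = np.array([analyze(c) for c in np.array_split(x, 100)]).T
        w = 1.0 / v
        pooled_r = np.tanh((w * z).sum() / w.sum())  # fixed-effect pooled estimate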

  5. Big Crater as Viewed by Pathfinder Lander

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The 'Big Crater' is actually a relatively small Martian crater to the southeast of the Mars Pathfinder landing site. It is 1500 meters (4900 feet) in diameter, or about the same size as Meteor Crater in Arizona. Superimposed on the rim of Big Crater (the central part of the rim as seen here) is a smaller crater nicknamed 'Rimshot Crater.' The distance to this smaller crater, and the nearest portion of the rim of Big Crater, is 2200 meters (7200 feet). To the right of Big Crater, south from the spacecraft, almost lost in the atmospheric dust 'haze,' is the large streamlined mountain nicknamed 'Far Knob.' This mountain is over 450 meters (1480 feet) tall, and is over 30 kilometers (19 miles) from the spacecraft. Another, smaller and closer knob, nicknamed 'Southeast Knob' can be seen as a triangular peak to the left of the flanks of the Big Crater rim. This knob is 21 kilometers (13 miles) southeast from the spacecraft.

    The larger features visible in this scene - Big Crater, Far Knob, and Southeast Knob - were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The scene includes rocky ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of South Twin Peak. The largest rock in the nearfield, just left of center in the foreground, nicknamed 'Otter', is about 1.5 meters (4.9 feet) long and 10 meters (33 feet) from the spacecraft.

    This view of Big Crater was produced by combining 6 individual 'Superpan' scenes from the left and right eyes of the IMP camera. Each frame consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be.

    Mars Pathfinder is the second in NASA

  6. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.

  7. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
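
    Metamorphic testing checks relations that must hold between multiple executions when no ground-truth output is available. The sketch below applies two such relations to a toy read-counting program; the program and the relations are hypothetical stand-ins for a real pipeline stage such as an aligner or variant caller.

        import random

        def count_reads_with_motif(reads, motif):
            """Toy program under test: count the reads containing a motif."""
            return sum(motif in read for read in reads)

        def metamorphic_check(reads, motif):
            # Relation 1: permuting the input must not change the count.
            shuffled = list(reads)
            random.Random(0).shuffle(shuffled)
            assert (count_reads_with_motif(reads, motif)
                    == count_reads_with_motif(shuffled, motif))
            # Relation 2: duplicating the input must double the count.
            assert (count_reads_with_motif(reads + reads, motif)
                    == 2 * count_reads_with_motif(reads, motif))

        metamorphic_check(["ACGT", "TTGACG", "CCCA"], "ACG")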

  8. The National Institutes of Health's Big Data to Knowledge (BD2K) initiative: capitalizing on biomedical big data.

    PubMed

    Margolis, Ronald; Derr, Leslie; Dunn, Michelle; Huerta, Michael; Larkin, Jennie; Sheehan, Jerry; Guyer, Mark; Green, Eric D

    2014-01-01

    Biomedical research has and will continue to generate large amounts of data (termed 'big data') in many formats and at all levels. Consequently, there is an increasing need to better understand and mine the data to further knowledge and foster new discovery. The National Institutes of Health (NIH) has initiated a Big Data to Knowledge (BD2K) initiative to maximize the use of biomedical big data. BD2K seeks to better define how to extract value from the data, both for the individual investigator and the overall research community, create the analytic tools needed to enhance utility of the data, provide the next generation of trained personnel, and develop data science concepts and tools that can be made available to all stakeholders.

  9. Teaching Information & Technology Skills: The Big6[TM] in Secondary Schools.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This companion volume to a previous work focusing on the Big6 Approach in elementary schools provides secondary school classroom teachers, teacher-librarians, and technology teachers with the background and tools necessary to implement an integrated Big6 program. The first part of this book explains the Big6 approach and the rationale behind it.…

  10. The Big Fish

    ERIC Educational Resources Information Center

    DeLisle, Rebecca; Hargis, Jace

    2005-01-01

    The Killer Whale, Shamu jumps through hoops and splashes tourists in hopes for the big fish, not because of passion, desire or simply the enjoyment of doing so. What would happen if those fish were obsolete? Would this killer whale be able to find the passion to continue to entertain people? Or would Shamu find other exciting activities to do…

  11. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

    The superbulky deca-aryleuropocene [Eu(Cp(BIG))2], Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl, was prepared by reaction of [Eu(dmat)2(thf)2], DMAT = 2-Me2N-α-Me3Si-benzyl, with two equivalents of Cp(BIG)H. Recrystallization from cold hexane gave the product with a surprisingly bright and efficient orange emission (45% quantum yield). The crystal structure is isomorphic to those of [M(Cp(BIG))2] (M = Sm, Yb, Ca, Ba) and shows the typical distortions that arise from Cp(BIG)⋅⋅⋅Cp(BIG) attraction, as well as an excessively large displacement parameter for the heavy Eu atom (U(eq) = 0.075). In order to gain information on the true oxidation state of the central metal in superbulky metallocenes [M(Cp(BIG))2] (M = Sm, Eu, Yb), several physical analyses were applied. Temperature-dependent magnetic susceptibility data of [Yb(Cp(BIG))2] show diamagnetism, indicating stable divalent ytterbium. Temperature-dependent (151)Eu Mössbauer effect spectroscopy of [Eu(Cp(BIG))2] was performed over the temperature range 93-215 K, and the hyperfine and dynamical properties of the Eu(II) species are discussed in detail. The mean square amplitude of vibration of the Eu atom as a function of temperature was determined and compared to the value extracted from the single-crystal X-ray data at 203 K. The large difference in these two values was ascribed to the presence of static disorder and/or the presence of low-frequency torsional and librational modes in [Eu(Cp(BIG))2]. X-ray absorbance near edge spectroscopy (XANES) showed that all three [Ln(Cp(BIG))2] (Ln = Sm, Eu, Yb) compounds are divalent. The XANES white-line spectra are at 8.3, 7.3, and 7.8 eV, for Sm, Eu, and Yb, respectively, lower than the Ln2O3 standards. No XANES temperature dependence was found from room temperature to 100 K. XANES also showed that the [Ln(Cp(BIG))2] complexes had less trivalent impurity than a [EuI2(thf)x] standard. The complex [Eu(Cp(BIG))2] shows already at room temperature

  12. Business and Science - Big Data, Big Picture

    NASA Astrophysics Data System (ADS)

    Rosati, A.

    2013-12-01

    Data Science is more than the creation, manipulation, and transformation of data. It is more than Big Data. The business world seems to have a hold on the term 'data science' and, for now, they define what it means. But business is very different than science. In this talk, I address how large datasets, Big Data, and data science are conceptually different in business and science worlds. I focus on the types of questions each realm asks, the data needed, and the consequences of findings. Gone are the days of datasets being created or collected to serve only one purpose or project. The trick with data reuse is to become familiar enough with a dataset to be able to combine it with other data and extract accurate results. As a Data Curator for the Advanced Cooperative Arctic Data and Information Service (ACADIS), my specialty is communication. Our team enables Arctic sciences by ensuring datasets are well documented and can be understood by reusers. Previously, I served as a data community liaison for the North American Regional Climate Change Assessment Program (NARCCAP). Again, my specialty was communicating complex instructions and ideas to a broad audience of data users. Before entering the science world, I was an entrepreneur. I have a bachelor's degree in economics and a master's degree in environmental social science. I am currently pursuing a Ph.D. in Geography. Because my background has embraced both the business and science worlds, I would like to share my perspectives on data, data reuse, data documentation, and the presentation or communication of findings. My experiences show that each can inform and support the other.

  13. Unsupervised Tensor Mining for Big Data Practitioners.

    PubMed

    Papalexakis, Evangelos E; Faloutsos, Christos

    2016-09-01

    Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.
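
    As a concrete illustration of the tensor tools being popularized here, the sketch below computes a rank-R CP (CANDECOMP/PARAFAC) decomposition of a three-way array with plain alternating least squares; a practitioner would normally use a library implementation, and the (user, item, time) framing is an assumed example.

        import numpy as np

        def cp_als(x, rank, iters=50, seed=0):
            """Rank-`rank` CP decomposition of a 3-way array via naive ALS."""
            rng = np.random.default_rng(seed)
            factors = [rng.standard_normal((d, rank)) for d in x.shape]
            for _ in range(iters):
                for mode in range(3):
                    a, b = [factors[m] for m in range(3) if m != mode]
                    # Row-wise Khatri-Rao product of the two fixed factors.
                    kr = np.einsum('ir,jr->ijr', a, b).reshape(-1, rank)
                    unfolded = np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)
                    gram = (a.T @ a) * (b.T @ b)
                    factors[mode] = unfolded @ kr @ np.linalg.pinv(gram)
            return factors

        # Hypothetical (user, item, time) interaction tensor.
        x = np.random.default_rng(1).random((20, 30, 10))
        user_f, item_f, time_f = cp_als(x, rank=5)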

  14. [Big data analysis and evidence-based medicine: controversy or cooperation].

    PubMed

    Chen, Xinzu; Hu, Jiankun

    2016-01-01

    The development of evidence-based medicine marks an important milestone in the transition from empirical medicine to evidence-driven modern medicine. With the explosion of biomedical data, rising big data analytics can efficiently address exploratory questions and decision-making issues in biomedicine and healthcare activities. The current problem in China is that big data analysis is still not well conducted or applied to problems such as clinical decision-making and public health policy; the debate should not be whether big data analysis can replace evidence-based medicine. Therefore, we should clearly understand that, whether for evidence-based medicine or big data analysis, the most critical infrastructure is the substantial work of designing, constructing, and collecting original databases in China.

  15. Circulating big endothelin-1: an active role in pulmonary thromboendarterectomy?

    PubMed

    Langer, Frank; Bauer, Michael; Tscholl, Dietmar; Schramm, Rene; Kunihara, Takashi; Lausberg, Henning; Georg, Thomas; Wilkens, Heinrike; Schäfers, Hans-Joachim

    2005-11-01

    Pulmonary thromboendarterectomy is an effective treatment for patients with chronic thromboembolic pulmonary hypertension. The early postoperative course may be associated with pulmonary vasoconstriction and profound systemic vasodilation. We investigated the potential involvement of endothelins in these hemodynamic alterations. Seventeen patients with chronic thromboembolic pulmonary hypertension (pulmonary vascular resistance, 1015 ± 402 dyne·s·cm⁻⁵ [mean ± SD]) underwent pulmonary thromboendarterectomy with cardiopulmonary bypass and deep hypothermic circulatory arrest. Peripheral arterial blood samples were drawn before sternotomy, during cardiopulmonary bypass before and after deep hypothermic circulatory arrest, and 0, 8, 16, and 24 hours after surgery and were analyzed for big endothelin-1. The patients were divided into 2 groups according to whether their preoperative big endothelin-1 plasma level was above or below the cutoff point of 2.1 pg/mL, as determined by receiver operating characteristic curve analysis (group A, big endothelin-1 <2.1 pg/mL, n = 8; group B, big endothelin-1 ≥2.1 pg/mL, n = 9). Patients in group B, with higher preoperative big endothelin-1 levels (3.2 ± 1.0 pg/mL vs 1.5 ± 0.4 pg/mL; P < .001), were poorer operative candidates (preoperative mean pulmonary artery pressure, 51.3 ± 7.1 mm Hg vs 43.6 ± 6.2 mm Hg; P = .006) and had a poorer outcome (mean pulmonary artery pressure 24 hours after surgery, 32.6 ± 9.5 mm Hg vs 21.8 ± 6.2 mm Hg; P < .001). Positive correlations were found between preoperative big endothelin-1 levels and preoperative mean pulmonary artery pressure (r = 0.56; P = .02) as well as postoperative mean pulmonary artery pressure at 0 hours (r = 0.70; P = .002) and 24 hours (r = 0.63; P = .006) after surgery. Preoperative big endothelin-1 levels predicted outcome (postoperative mean pulmonary artery pressure at 24 hours after surgery) after pulmonary thromboendarterectomy (area under the

  16. Big data and ophthalmic research.

    PubMed

    Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B

    2016-01-01

    Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research-including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Big I (I-40/I-25) reconstruction & ITS infrastructure.

    DOT National Transportation Integrated Search

    2010-04-20

    The New Mexico Department of Transportation (NMDOT) rebuilt the Big I interchange in Albuquerque to make it safer and more efficient and to provide better access. The Big I is where the Coronado Interstate (I-40) and the Pan American Freeway (I-25) i...

  18. The Challenge of Handling Big Data Sets in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Autermann, Christian; Stasch, Christoph; Jirka, Simon

    2016-04-01

    More and more Sensor Web components are deployed in different domains such as hydrology, oceanography or air quality in order to make observation data accessible via the Web. However, besides variability of data formats and protocols in environmental applications, the fast growing volume of data with high temporal and spatial resolution is imposing new challenges for Sensor Web technologies when sharing observation data and metadata about sensors. Variability, volume and velocity are the core issues that are addressed by Big Data concepts and technologies. Most solutions in the geospatial sector focus on remote sensing and raster data, whereas big in-situ observation data sets relying on vector features require novel approaches. Hence, in order to deal with big data sets in infrastructures for observational data, the following questions need to be answered: 1. How can big heterogeneous spatio-temporal datasets be organized, managed, and provided to Sensor Web applications? 2. How can views on big data sets and derived information products be made accessible in the Sensor Web? 3. How can big observation data sets be processed efficiently? We illustrate these challenges with examples from the marine domain and outline how we address these challenges. We therefore show how big data approaches from mainstream IT can be re-used and applied to Sensor Web application scenarios.

  19. What's the Big Idea? Seeking to Top Apollo

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    2012-01-01

    Human space flight has struggled to find its soul since Apollo. The astounding achievements of human space programs over the 40 years since Apollo have failed to be as iconic or central to society as in the 1960s. The paper proffers a way human space flight could again be associated with a societal Big Idea. It describes eight societal factors that have irrevocably changed since Apollo; then analyzes eight other factors that a forward HSF Big Idea would have to fit. The paper closes by assessing the four principal options for HSF futures against those eight factors. Robotic and human industrialization of geosynchronous orbit to provide unlimited, sustainable electrical power to Earth is found to be the best candidate for the next Big Idea.

  20. The effect of big endothelin-1 in the proximal tubule of the rat kidney

    PubMed Central

    Beara-Lasić, Lada; Knotek, Mladen; Čejvan, Kenan; Jakšić, Ozren; Lasić, Zoran; Skorić, Boško; Brkljačić, Vera; Banfić, Hrvoje

    1997-01-01

    An obligatory step in the biosynthesis of endothelin-1 (ET-1) is the conversion of its inactive precursor, big ET-1, into the mature form by the action of specific, phosphoramidon-sensitive, endothelin converting enzyme(s) (ECE). Disparate effects of big ET-1 and ET-1 on renal tubule function suggest that big ET-1 might directly influence renal tubule function. Therefore, the role of the enzymatic conversion of big ET-1 into ET-1 in eliciting the functional response (generation of 1,2-diacylglycerol) to big ET-1 was studied in the rat proximal tubules. In renal cortical slices incubated with big ET-1, pretreatment with phosphoramidon (an ECE inhibitor) reduced tissue immunoreactive ET-1 to a level similar to that of cortical tissue not exposed to big ET-1. This confirms the presence and effectiveness of ECE inhibition by phosphoramidon. In freshly isolated proximal tubule cells, big ET-1 stimulated the generation of 1,2-diacylglycerol (DAG) in a time- and dose-dependent manner. Neither phosphoramidon nor chymostatin, a chymase inhibitor, influenced the generation of DAG evoked by big ET-1. Big ET-1-dependent synthesis of DAG was found in the brush-border membrane. It was unaffected by BQ123, an ETA receptor antagonist, but was blocked by bosentan, an ETA,B-nonselective endothelin receptor antagonist. These results suggest that the proximal tubule is a site for the direct effect of big ET-1 in the rat kidney. The effect of big ET-1 is confined to the brush-border membrane of the proximal tubule, which may be the site of big ET-1-sensitive receptors. PMID:9051300

  1. Introducing the Big Knowledge to Use (BK2U) challenge.

    PubMed

    Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan

    2017-01-01

    The purpose of the Big Data to Knowledge initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use. Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK (rule BK) and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule BK for drug-drug interaction discovery. © 2016 New York Academy of Sciences.

  2. Introducing the Big Knowledge to Use (BK2U) challenge

    PubMed Central

    Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan

    2016-01-01

    The purpose of the Big Data to Knowledge (BD2K) initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use (BK2U). Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule-based BK for drug–drug interaction discovery. PMID:27750400

  3. Big data and visual analytics in anaesthesia and health care.

    PubMed

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Gender Differences in Personality across the Ten Aspects of the Big Five.

    PubMed

    Weisberg, Yanna J; Deyoung, Colin G; Hirsh, Jacob B

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level.

  5. Gender Differences in Personality across the Ten Aspects of the Big Five

    PubMed Central

    Weisberg, Yanna J.; DeYoung, Colin G.; Hirsh, Jacob B.

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level. PMID:21866227

  6. Survey of Cyber Crime in Big Data

    NASA Astrophysics Data System (ADS)

    Rajeswari, C.; Soni, Krishna; Tandon, Rajat

    2017-11-01

    Big data involves performing computational and database operations over very large volumes of data drawn automatically from the data owner's business. Because a key strategic promise of big data is access to information from numerous and varied domains, security and privacy will play an essential part in big data research and innovation. The limits of standard IT security practices are well known: adversaries can induce software developers to embed malicious code in applications and operating systems, a genuine and growing threat that is difficult to counter, and one whose impact spreads even faster with big data. A central question is therefore whether current security and privacy technologies can provide controlled assurance for very large numbers of direct accesses, in which a user authorized in one domain may need to access data in that domain or in others. For many years, trustworthy-systems development has produced a rich set of proven security concepts for dealing with determined adversaries, yet this work has largely been dismissed by vendors as "needless excess." In this survey, the essentials of applying this mature security and privacy technology to big data are discussed, and the remaining research challenges are explored.

  7. Moving Another Big Desk.

    ERIC Educational Resources Information Center

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  8. HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.

    PubMed

    Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael

    2017-01-01

    Precision medicine is a health management approach that accounts for individual differences in genetic backgrounds and environmental exposures. With the recent advancements in high-throughput omics profiling technologies, collections of large study cohorts, and the developments of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, petabytes of biomedical data generated by multiple measurement modalities pose a significant challenge for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, as well as discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine. Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.

  9. The BigBOSS spectrograph

    NASA Astrophysics Data System (ADS)

    Jelinsky, Patrick; Bebek, Chris; Besuner, Robert; Carton, Pierre-Henri; Edelstein, Jerry; Lampton, Michael; Levi, Michael E.; Poppett, Claire; Prieto, Eric; Schlegel, David; Sholl, Michael

    2012-09-01

    BigBOSS is a proposed ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a 14,000 square degree galaxy and quasi-stellar object redshift survey. It consists of a 5,000- fiber-positioner focal plane feeding the spectrographs. The optical fibers are separated into ten 500 fiber slit heads at the entrance of ten identical spectrographs in a thermally insulated room. Each of the ten spectrographs has a spectral resolution (λ/Δλ) between 1500 and 4000 over a wavelength range from 360 - 980 nm. Each spectrograph uses two dichroic beam splitters to separate the spectrograph into three arms. It uses volume phase holographic (VPH) gratings for high efficiency and compactness. Each arm uses a 4096x4096 15 μm pixel charge coupled device (CCD) for the detector. We describe the requirements and current design of the BigBOSS spectrograph. Design trades (e.g. refractive versus reflective) and manufacturability are also discussed.
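
    The quoted spectral resolution fixes the smallest resolvable wavelength interval directly; as a worked example (the 600 nm and R = 3000 values are chosen here for illustration, not taken from the design):

    \[
    \Delta\lambda \;=\; \frac{\lambda}{R} \;=\; \frac{600\ \text{nm}}{3000} \;=\; 0.2\ \text{nm},
    \]

    so at 600 nm a resolution between 1500 and 4000 corresponds to wavelength bins between roughly 0.4 and 0.15 nm across the 360-980 nm band.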

  10. COBE looks back to the Big Bang

    NASA Technical Reports Server (NTRS)

    Mather, John C.

    1993-01-01

    An overview is presented of NASA-Goddard's Cosmic Background Explorer (COBE), the first NASA satellite designed to observe the primeval explosion of the universe. The spacecraft carries three extremely sensitive IR and microwave instruments designed to measure the faint residual radiation from the Big Bang and to search for the formation of the first galaxies. COBE's far IR absolute spectrophotometer has shown that the Big Bang radiation has a blackbody spectrum, proving that there was no large energy release after the explosion.

  11. A survey on platforms for big data analytics.

    PubMed

    Singh, Dilpreet; Reddy, Chandan K

    The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
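
    The k-means example the survey uses as its common thread is simple enough to sketch platform-neutrally. The following minimal Lloyd's-algorithm implementation in NumPy (a sketch written for this summary, not the paper's own pseudocode) shows the two alternating steps, assignment and centroid update, that each platform must distribute or parallelize.

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        """Lloyd's algorithm: alternate nearest-centroid assignment
        and centroid recomputation until the centroids stabilize."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assignment step: squared distance from every point to every centroid.
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Update step: mean of the points assigned to each cluster.
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return centers, labels

    centers, labels = kmeans(np.random.default_rng(1).normal(size=(500, 2)), k=3)

    On a distributed platform the assignment step is embarrassingly parallel over data partitions, while the update step requires a global aggregation, which is exactly where the surveyed platforms differ in data I/O rate and fault tolerance.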

  12. Big data: the management revolution.

    PubMed

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  13. Hubble Spies Big Bang Frontiers

    NASA Image and Video Library

    2017-12-08

    Observations by the NASA/ESA Hubble Space Telescope have taken advantage of gravitational lensing to reveal the largest sample of the faintest and earliest known galaxies in the universe. Some of these galaxies formed just 600 million years after the big bang and are fainter than any other galaxy yet uncovered by Hubble. The team has determined for the first time with some confidence that these small galaxies were vital to creating the universe that we see today. An international team of astronomers, led by Hakim Atek of the Ecole Polytechnique Fédérale de Lausanne, Switzerland, has discovered over 250 tiny galaxies that existed only 600-900 million years after the big bang, one of the largest samples of dwarf galaxies yet to be discovered at these epochs. The light from these galaxies took over 12 billion years to reach the telescope, allowing the astronomers to look back in time when the universe was still very young. Read more: www.nasa.gov/feature/goddard/hubble-spies-big-bang-frontiers Credit: NASA/ESA

  14. Myocardial Perfusion SPECT 2015 in Germany

    PubMed Central

    Burchert, Wolfgang; Schäfer, Wolfgang; Hacker, Marcus

    2016-01-01

    Summary Aim The working group Cardiovascular Nuclear Medicine of the German Society of Nuclear Medicine presents the results of the 7th survey of myocardial perfusion SPECT (MPS) of the reporting year 2015. Method 268 questionnaires (173 practices [PR], 67 hospitals [HO], 28 university hospitals [UH]) were evaluated. Results of the last survey from 2012 are set in square brackets. Results MPS of 121 939 [105 941] patients were reported. 98 % [95 %] of all MPS were performed with Tc-99m radiopharmaceuticals and 2 % [5 %] with Tl-201. 78 % [79 %] of all patients were studied in PR, 14 % [15 %] in HO, and 8 % [6 %] in UH. A pharmacological stress test was performed in 43 % [39 %] (22 % [24 %] adenosine, 20 % [9 %] regadenoson, 1 % [6 %] dipyridamole or dobutamine). Attenuation correction was applied in 25 % [2009: 10 %] of MPS. Gated SPECT was performed in 78 % [70 %] of all rest MPS, in 80 % [73 %] of all stress and in 76 % [67 %] of all stress and rest MPS. 53 % [33 %] of all nuclear medicine departments performed MPS scoring by default, whereas 24 % [41 %] did not apply any quantification. 31 % [26 %] of all departments reported an increase in their MPS counts and 29 % [29 %] no changes. Data from 89 departments which participated in all surveys showed an increase in MPS count of 11.1 % (PR: 12.2 %, HO: 4.8 %, UH: 18.4 %). 70 % [60 %] of the MPS were requested by ambulatory care cardiologists. Conclusion The 2015 MPS survey reveals a high degree of adherence of routine MPS practice to current guidelines. The positive trend in MPS performance and number of MPS already observed in 2012 continues. Educational training remains necessary in the field of SPECT scoring. PMID:27909712

  15. "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes

    MedlinePlus

    ... "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes ... onset. Those are the basic facts of "Small Steps. Big Rewards: Prevent type 2 Diabetes," created by ...

  16. Big Data Solution for CTBT Monitoring Using Global Cross Correlation

    NASA Astrophysics Data System (ADS)

    Gaillard, P.; Bobrov, D.; Dupont, A.; Grenouille, A.; Kitov, I. O.; Rozhkov, M.

    2014-12-01

    Due to the mismatch between data volume and the performance of the Information Technology infrastructure used in seismic data centers, it becomes more and more difficult to process all the data with traditional applications in a reasonable elapsed time. To fulfill their missions, the International Data Centre of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO/IDC) and the Département Analyse Surveillance Environnement of Commissariat à l'Energie atomique et aux énergies alternatives (CEA/DASE) collect, process and produce complex data sets whose volume is growing exponentially. In the medium term, computer architectures, data management systems and application algorithms will require fundamental changes to meet the needs. This problem is well known and identified as a "Big Data" challenge. To tackle this major task, the CEA/DASE has taken part for two years in the "DataScale" project. Started in September 2013, DataScale gathers a large set of partners (research laboratories, SMEs and big companies). The common objective is to design efficient solutions using the synergy between Big Data solutions and High Performance Computing (HPC). The project will evaluate the relevance of these technological solutions by implementing a demonstrator for seismic event detection through massive waveform cross correlation. The IDC has developed an expertise on such techniques, leading to an algorithm called "Master Event", and provides a high-quality dataset for an extensive cross correlation study. The objective of the project is to enhance the Master Event algorithm and to reanalyze 10 years of waveform data from the International Monitoring System (IMS) network on a dedicated HPC infrastructure operated by the "Centre de Calcul Recherche et Technologie" at the CEA of Bruyères-le-Châtel. The dataset used for the demonstrator includes more than 300,000 seismic events, tens of millions of raw detections and more than 30 terabytes of continuous seismic data.
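
    The core operation behind template-based detection of this kind is sliding a master-event waveform along continuous data and flagging lags where the normalized correlation is high. The sketch below is a minimal single-channel illustration written for this summary; the function name, the toy data, and the sine template are assumptions, not the IDC's actual Master Event implementation.

    import numpy as np

    def normalized_xcorr(template, trace):
        """Normalized cross correlation of `template` at every lag of `trace`;
        output values lie in [-1, 1]."""
        n = len(template)
        t = (template - template.mean()) / (template.std() * n)
        out = np.empty(len(trace) - n + 1)
        for i in range(len(out)):
            w = trace[i:i + n]
            s = w.std()
            out[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
        return out

    rng = np.random.default_rng(0)
    master = np.sin(np.linspace(0, 8 * np.pi, 200))   # master-event template
    trace = rng.normal(0.0, 0.5, 5000)                # continuous noisy record
    trace[3000:3200] += master                        # buried repeating event
    cc = normalized_xcorr(master, trace)
    print(cc.argmax(), round(cc.max(), 2))            # detection near sample 3000

    Scaling this inner loop to hundreds of thousands of templates against terabytes of continuous data is precisely the Big Data/HPC workload the DataScale demonstrator targets.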

  17. Using Dynamic Interface Modeling and Simulation to Develop a Launch and Recovery Flight Simulation for a UH-60A Blackhawk

    NASA Technical Reports Server (NTRS)

    Sweeney, Christopher; Bunnell, John; Chung, William; Giovannetti, Dean; Mikula, Julie; Nicholson, Bob; Roscoe, Mike

    2001-01-01

    Joint Shipboard Helicopter Integration Process (JSHIP) is a Joint Test and Evaluation (JT&E) program sponsored by the Office of the Secretary of Defense (OSD). Under the JSHIP program is a simulation effort referred to as the Dynamic Interface Modeling and Simulation System (DIMSS). The purpose of DIMSS is to develop and test the processes and mechanisms that facilitate ship-helicopter interface testing via man-in-the-loop ground-based flight simulators. Specifically, the DIMSS charter is to develop an accredited process for using a flight simulator to determine the wind-over-the-deck (WOD) launch and recovery flight envelope for the UH-60A ship/helicopter combination. DIMSS is a collaborative effort between the NASA Ames Research Center and OSD. OSD determines the T&E and warfighter training requirements, provides the programmatics and dynamic interface T&E experience, and conducts ship/aircraft interface tests for validating the simulation. NASA provides the research and development element, simulation facility, and simulation technical experience. This paper will highlight the benefits of the NASA/JSHIP collaboration and detail achievements of the project in terms of modeling and simulation. The Vertical Motion Simulator (VMS) at NASA Ames Research Center offers the capability to simulate a wide range of simulation cueing configurations, which include visual, aural, and body-force cueing devices. The system flexibility enables switching configurations to allow back-to-back evaluation and comparison of different levels of cueing fidelity in determining minimum training requirements. The investigation required development and integration of several major simulation systems at the VMS. A new UH-60A BlackHawk interchangeable cab that provides an out-the-window (OTW) field-of-view (FOV) of 220 degrees in azimuth and 70 degrees in elevation was built. Modeling efforts involved integrating Computational Fluid Dynamics (CFD) generated data of an LHA ship airwake and

  18. 'Big data' in mental health research: current status and emerging possibilities.

    PubMed

    Stewart, Robert; Davis, Katrina

    2016-08-01

    'Big data' are accumulating in a multitude of domains and offer novel opportunities for research. The role of these resources in mental health investigations remains relatively unexplored, although a number of datasets are in use and supporting a range of projects. We sought to review big data resources and their use in mental health research to characterise applications to date and consider directions for innovation in future. A narrative review. Clear disparities were evident in geographic regions covered and in the disorders and interventions receiving most attention. We discuss the strengths and weaknesses of the use of different types of data and the challenges of big data in general. Current research output from big data is still predominantly determined by the information and resources available and there is a need to reverse the situation so that big data platforms are more driven by the needs of clinical services and service users.

  19. Big Data: You Are Adding to . . . and Using It

    ERIC Educational Resources Information Center

    Makela, Carole J.

    2016-01-01

    "Big data" prompts a whole lexicon of terms--data flow; analytics; data mining; data science; smart you name it (cars, houses, cities, wearables, etc.); algorithms; learning analytics; predictive analytics; data aggregation; data dashboards; digital tracks; and big data brokers. New terms are being coined frequently. Are we paying…

  20. The use of big data in transfusion medicine.

    PubMed

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  1. Records for conversion of laser energy to nuclear energy in exploding nanostructures

    NASA Astrophysics Data System (ADS)

    Jortner, Joshua; Last, Isidore

    2017-09-01

    Table-top nuclear fusion reactions in the chemical physics laboratory can be driven by high-energy dynamics of Coulomb exploding, multicharged, deuterium-containing nanostructures generated by ultraintense, femtosecond, near-infrared laser pulses. Theoretical-computational studies of table-top laser-driven nuclear fusion of high-energy (up to 15 MeV) deuterons with 7Li, 6Li and D nuclei demonstrate the attainment of high fusion yields within a source-target reaction design, which constitutes the highest table-top fusion efficiency obtained to date. The conversion efficiency of laser energy to nuclear energy (0.1-1.0%) for table-top fusion is comparable to that for DT fusion currently accomplished with 'big science' inertial fusion setups.

  2. Statistical methods and computing for big data.

    PubMed

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on open-source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
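
    The online-updating idea is that the estimator is revised batch by batch without ever holding the full data set in memory. A minimal sketch using scikit-learn's partial_fit interface is shown below; the synthetic "stream," the batch size, and the coefficient vector are assumptions made for illustration, and the paper's own methods go further by extending online updating to variable selection.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Online logistic regression: each mini-batch from the stream updates
    # the current coefficient estimate; no batch is ever revisited.
    clf = SGDClassifier(loss="log_loss", random_state=0)
    classes = np.array([0, 1])
    rng = np.random.default_rng(0)
    true_beta = np.array([1.0, -2.0, 0.5, 0.0, 1.5])  # hypothetical truth

    for _ in range(100):                      # 100 chunks of streaming data
        X = rng.normal(size=(256, 5))
        y = (X @ true_beta + rng.normal(size=256) > 0).astype(int)
        clf.partial_fit(X, y, classes=classes)

    print(clf.coef_)                          # running estimate after the stream

    The memory footprint stays at one batch regardless of how long the stream runs, which is what makes this class of methods attractive when data exceed the capacity of standard tools.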

  3. Big data, smart cities and city planning

    PubMed Central

    2013-01-01

    I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data are largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London’s public transport systems. PMID:29472982

  4. Statistical methods and computing for big data

    PubMed Central

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on open-source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  5. Big data in food safety: An overview.

    PubMed

    Marvin, Hans J P; Janssen, Esmée M; Bouzembrak, Yamine; Hendriksen, Peter J M; Staats, Martijn

    2017-07-24

    Technology is now being developed that is able to handle vast amounts of structured and unstructured data from diverse sources and origins. These technologies are often referred to as big data, and they open new areas of research and applications that will have an increasing impact in all sectors of our society. In this paper we assessed to what extent big data is being applied in the food safety domain and identified several promising trends. In several parts of the world, governments stimulate the publication on the internet of all data generated in publicly funded research projects. This policy opens new opportunities for stakeholders dealing with food safety to address issues which were not possible before. Application of mobile phones as detection devices for food safety and the use of social media as early warning of food safety problems are a few examples of the new developments that are possible due to big data.

  6. bwtool: a tool for bigWig files

    PubMed Central

    Pohl, Andy; Beato, Miguel

    2014-01-01

    BigWig files are a compressed, indexed, binary format for genome-wide signal data for calculations (e.g. GC percent) or experiments (e.g. ChIP-seq/RNA-seq read depth). bwtool is a tool designed to read bigWig files rapidly and efficiently, providing functionality for extracting data and summarizing it in several ways, globally or at specific regions. Additionally, the tool enables the conversion of the positions of signal data from one genome assembly to another, also known as ‘lifting’. We believe bwtool can be useful for the analyst frequently working with bigWig data, which is becoming a standard format to represent functional signals along genomes. The article includes supplementary examples of running the software. Availability and implementation: The C source code is freely available under the GNU public license v3 at http://cromatina.crg.eu/bwtool. Contact: andrew.pohl@crg.eu, andypohl@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24489365
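
    bwtool itself is a C command-line program, but the kind of operation it performs (opening a bigWig file and extracting or summarizing signal over regions) can be sketched from Python with the separate pyBigWig library. The substitute library, the file name, and the coordinates below are assumptions chosen for illustration; this is not bwtool's own interface.

    import pyBigWig

    bw = pyBigWig.open("coverage.bw")                  # hypothetical input file
    print(bw.header())                                 # global summary statistics
    print(bw.stats("chr1", 0, 100_000, type="mean"))   # mean signal over a region
    values = bw.values("chr1", 0, 1_000)               # per-base signal values
    bw.close()

    For batch work over many regions or for the assembly-to-assembly "lifting" the abstract mentions, the compiled bwtool binary remains the tool the authors describe.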

  7. Big data, smart cities and city planning.

    PubMed

    Batty, Michael

    2013-11-01

    I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data are largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London's public transport systems.

  8. Acute Kidney Injury and Big Data.

    PubMed

    Sutherland, Scott M; Goldstein, Stuart L; Bagshaw, Sean M

    2018-01-01

    The recognition of a standardized, consensus definition for acute kidney injury (AKI) has been an important milestone in critical care nephrology, which has facilitated innovation in prevention, quality of care, and outcomes research among the growing population of hospitalized patients susceptible to AKI. Concomitantly, there have been substantial advances in "big data" technologies in medicine, including electronic health records (EHR), data registries and repositories, and data management and analytic methodologies. EHRs are increasingly being adopted, clinical informatics is constantly being refined, and the field of EHR-enabled care improvement and research has grown exponentially. While these fields have matured independently, integrating the two has the potential to redefine and integrate AKI-related care and research. AKI is an ideal condition to exploit big data health care innovation for several reasons: AKI is common, increasingly encountered in hospitalized settings, imposes meaningful risk for adverse events and poor outcomes, has incremental cost implications, and has been plagued by suboptimal quality of care. In this concise review, we discuss the potential applications of big data technologies, particularly modern EHR platforms and health data repositories, to transform our capacity for AKI prediction, detection, and care quality. © 2018 S. Karger AG, Basel.

  9. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    PubMed

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  10. Comparative effectiveness research and big data: balancing potential with legal and ethical considerations.

    PubMed

    Gray, Elizabeth Alexandra; Thorpe, Jane Hyatt

    2015-01-01

    Big data holds big potential for comparative effectiveness research. The ability to quickly synthesize and use vast amounts of health data to compare medical interventions across settings of care, patient populations, payers and time will greatly inform efforts to improve quality, reduce costs and deliver more patient-centered care. However, the use of big data raises significant legal and ethical issues that may present barriers or limitations to the full potential of big data. This paper addresses the scope of some of these legal and ethical issues and how they may be managed effectively to fully realize the potential of big data.

  11. [Medical big data and precision medicine: prospects of epidemiology].

    PubMed

    Song, J; Hu, Y H

    2016-08-10

    With the development of high-throughput technologies, electronic medical record systems, and big data technology, the value of medical data has attracted increasing attention. At the same time, the Precision Medicine Initiative opens up new prospects for medical big data. As a methods-oriented discipline, epidemiology focuses on exploiting existing big data resources and promoting the integration of translational research and knowledge, to fully unlock the "black box" of the exposure-disease continuum and to accelerate realization of the ultimate goals of precision medicine. The overall purpose, however, is to translate evidence from scientific research into improvements in people's health.

  12. Big bang photosynthesis and pregalactic nucleosynthesis of light elements

    NASA Technical Reports Server (NTRS)

    Audouze, J.; Lindley, D.; Silk, J.

    1985-01-01

    Two nonstandard scenarios for pregalactic synthesis of the light elements (H-2, He-3, He-4, and Li-7) are developed. Big bang photosynthesis occurs if energetic photons, produced by the decay of massive neutrinos or gravitinos, partially photodisintegrate He-4 (formed in the standard hot big bang) to produce H-2 and He-3. In this case, primordial nucleosynthesis no longer constrains the baryon density of the universe, or the number of neutrino species. Alternatively, one may dispense partially or completely with the hot big bang and produce the light elements by bombardment of primordial gas, provided that He-4 is synthesized by a later generation of massive stars.
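
    The photodisintegration channels implied here are the standard ones; written out for clarity (a reconstruction, not equations quoted from the paper), energetic decay photons split helium-4 and helium-3:

    \[
    \gamma + {}^{4}\mathrm{He} \rightarrow {}^{3}\mathrm{He} + n, \qquad
    \gamma + {}^{4}\mathrm{He} \rightarrow {}^{3}\mathrm{H} + p, \qquad
    \gamma + {}^{3}\mathrm{He} \rightarrow {}^{2}\mathrm{H} + p,
    \]

    so the observed H-2 and He-3 abundances can be produced after, rather than during, the hot big bang.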

  13. How Big is Earth?

    NASA Astrophysics Data System (ADS)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the Earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the Earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was the Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big Is Earth? expands on the Eratosthenes Project through the online learning environment of the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss and brainstorm solutions in a discussion forum. There is an ongoing database of student measurements and another database collecting data on teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
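
    The arithmetic the students reproduce is Eratosthenes' one-line proportion: the noon shadow angle at one site, divided into 360 degrees, scales the distance to the second site up to the full circumference. A worked example follows, using his classical values (the specific numbers are illustrative, not measurements from the project):

    import math

    stick = 1.0                     # gnomon height (any unit)
    shadow = 0.126                  # shadow length measured at local noon
    angle = math.degrees(math.atan(shadow / stick))   # about 7.2 degrees

    distance_km = 800.0             # distance between the two sites
                                    # (roughly Alexandria to Syene)
    circumference_km = 360.0 / angle * distance_km
    print(round(circumference_km))  # about 40,000 km, close to the true value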

  14. Big Data in Health: a Literature Review from the Year 2005.

    PubMed

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  15. Effect of fungicides on Wyoming big sagebrush seed germination

    Treesearch

    Robert D. Cox; Lance H. Kosberg; Nancy L. Shaw; Stuart P. Hardegree

    2011-01-01

    Germination tests of Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle & Young [Asteraceae]) seeds often exhibit fungal contamination, but the use of fungicides should be avoided because fungicides may artificially inhibit germination. We tested the effect of seed-applied fungicides on germination of Wyoming big sagebrush at 2 different...

  16. AmeriFlux US-Rws Reynolds Creek Wyoming big sagebrush

    DOE Data Explorer

    Flerchinger, Gerald [USDA Agricultural Research Service

    2017-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-Rws Reynolds Creek Wyoming big sagebrush. Site Description - The site is located on the USDA-ARS's Reynolds Creek Experimental Watershed. It is dominated by Wyoming big sagebrush on land managed by USDI Bureau of Land Management.

  17. Who's Afraid of the Big Black Man?

    ERIC Educational Resources Information Center

    Johnson, Jason Kyle

    2018-01-01

    This article examines the experiences of big Black men in both their personal and professional lives. Black men are often perceived as being aggressive, violent, and physically larger than their White counterparts. The negative perceptions of Black men, particularly big Black men, often lead to negative encounters with police, educators, and…

  18. Big Data for cardiology: novel discovery?

    PubMed

    Mayer-Schönberger, Viktor

    2016-03-21

    Big Data promises to change cardiology through a massive increase in the data gathered and analysed; but its impact goes beyond incrementally improving existing methods. The potential of comprehensive data sets for scientific discovery is examined, and its impact on the scientific method generally and cardiology in particular is posited, together with likely consequences for research and practice. Big Data in cardiology changes how new insights are being discovered. For it to flourish, significant modifications in the methods, structures, and institutions of the profession are necessary. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.

  19. Big Data Analytics in Chemical Engineering.

    PubMed

    Chiang, Leo; Lu, Bo; Castillo, Ivan

    2017-06-07

    Big data analytics is the journey to turn data into insights for more informed business and operational decisions. As the chemical engineering community is collecting more data (volume) from different sources (variety), this journey becomes more challenging in terms of using the right data and the right tools (analytics) to make the right decisions in real time (velocity). This article highlights recent big data advancements in five industries, including chemicals, energy, semiconductors, pharmaceuticals, and food, and then discusses technical, platform, and culture challenges. To reach the next milestone in multiplying successes to the enterprise level, government, academia, and industry need to collaboratively focus on workforce development and innovation.

  20. What is worse than the “big one”?

    USGS Publications Warehouse

    Kerr, R. A.

    1988-01-01

    The first thought in the minds of many residents of the city of Whittier when the first shock hit them was "Is this the big one?", the San Andreas' once-in-150-years great shaker. It might as well have been for Whittier, which is 20 kilometers east of downtown Los Angeles. The ground shook harder there this month than it will when the big one does strike the distant San Andreas, which lies 50 kilometers away on the other side of the mountains. And this was only a moderate, magnitude 6.1 shock. Earthquakes of magnitude 7 and larger, 30 times more powerful, could rupture faults beneath the feet of Angelenos at any time. The loss of life and destruction could exceed that caused by the big one.