Sample records for zone big sioux

  1. 76 FR 53827 - Safety Zone; Big Sioux River From the Military Road Bridge North Sioux City to the Confluence of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-30

    ...-AA00 Safety Zone; Big Sioux River From the Military Road Bridge North Sioux City to the Confluence of... rule extends the existing temporary safety zone on the Big Sioux River from the Military Road Bridge in... period. SUMMARY: The Coast Guard is extending the effective period for the temporary safety zone...

  2. Construction of a groundwater-flow model for the Big Sioux Aquifer using airborne electromagnetic methods, Sioux Falls, South Dakota

    USGS Publications Warehouse

    Valder, Joshua F.; Delzer, Gregory C.; Carter, Janet M.; Smith, Bruce D.; Smith, David V.

    2016-09-28

    The city of Sioux Falls is the fastest growing community in South Dakota. In response to this continued growth and planning for future development, Sioux Falls requires a sustainable supply of municipal water. Planning and managing sustainable groundwater supplies requires a thorough understanding of local groundwater resources. The Big Sioux aquifer consists of glacial outwash sands and gravels and is hydraulically connected to the Big Sioux River, which provided about 90 percent of the city’s source-water production in 2015. Managing sustainable groundwater supplies also requires an understanding of groundwater availability. An effective mechanism to inform water management decisions is the development and utilization of a groundwater-flow model. A groundwater-flow model provides a quantitative framework for synthesizing field information and conceptualizing hydrogeologic processes. These groundwater-flow models can support decision making processes by mapping and characterizing the aquifer. Accordingly, the city of Sioux Falls partnered with the U.S. Geological Survey to construct a groundwater-flow model. Model inputs will include data from advanced geophysical techniques, specifically airborne electromagnetic methods.

  3. BIG SIOUX RIVER DRAINAGE BASIN INFORMATION OUTREACH PROJECT

    EPA Science Inventory

    The main goal of the proposed project is to raise public awareness about the importance of protecting the Big Sioux River drainage basin. To accomplish this goal, the City and its partnering agencies are seeking to expand and improve public accessibility to a wide variety of r...

  4. Delineation of the hydrogeologic framework of the Big Sioux aquifer near Sioux Falls, South Dakota, using airborne electromagnetic data

    USGS Publications Warehouse

    Valseth, Kristen J.; Delzer, Gregory C.; Price, Curtis V.

    2018-03-21

    The U.S. Geological Survey, in cooperation with the City of Sioux Falls, South Dakota, began developing a groundwater-flow model of the Big Sioux aquifer in 2014 that will enable the City to make more informed water management decisions, such as delineation of areas of the greatest specific yield, which is crucial for locating municipal wells. Innovative tools are being evaluated as part of this study that can improve the delineation of the hydrogeologic framework of the aquifer for use in development of a groundwater-flow model, and the approach could have transfer value for similar hydrogeologic settings. The first step in developing a groundwater-flow model is determining the hydrogeologic framework (vertical and horizontal extents of the aquifer), which typically is determined by interpreting geologic information from drillers’ logs and surficial geology maps. However, well and borehole data only provide hydrogeologic information for a single location; conversely, nearly continuous geophysical data are collected along flight lines using airborne electromagnetic (AEM) surveys. These electromagnetic data are collected every 3 meters along a flight line (on average) and subsequently can be related to hydrogeologic properties. AEM data, coupled with and constrained by well and borehole data, can substantially improve the accuracy of aquifer hydrogeologic framework delineations and result in better groundwater-flow models. AEM data were acquired using the Resolve frequency-domain AEM system to map the Big Sioux aquifer in the region of the city of Sioux Falls. The survey acquired more than 870 line-kilometers of AEM data over a total area of about 145 square kilometers, primarily over the flood plain of the Big Sioux River between the cities of Dell Rapids and Sioux Falls. The U.S. Geological Survey inverted the survey data to generate resistivity-depth sections that were used in two-dimensional maps and in three-dimensional volumetric visualizations of the Earth
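The delineation workflow described above — inverting AEM soundings into resistivity-depth sections and picking aquifer extents from them — can be caricatured in a few lines. This is a toy sketch, not the USGS processing chain: the 60 ohm-m sand/gravel cutoff and the sample sounding are invented, and real delineations are jointly constrained by drillers'-log and borehole data.

```python
# Toy aquifer pick from one inverted AEM sounding (illustrative only).
# Assumption: coarse outwash sands/gravels read as high resistivity and
# fine-grained till as low; the 60 ohm-m cutoff is hypothetical.

def delineate_aquifer(resistivity_profile, cutoff_ohm_m=60.0):
    """Return (top, bottom) cell indices of the first contiguous
    high-resistivity interval, or None if nothing exceeds the cutoff."""
    top = None
    for i, rho in enumerate(resistivity_profile):
        if rho >= cutoff_ohm_m and top is None:
            top = i                      # aquifer top: first resistive cell
        elif rho < cutoff_ohm_m and top is not None:
            return (top, i)              # aquifer bottom: back into till
    return (top, len(resistivity_profile)) if top is not None else None

# One sounding: resistivity (ohm-m) per 1-m depth cell, shallow to deep.
profile = [25.0, 30.0, 80.0, 95.0, 110.0, 70.0, 20.0, 15.0]
print(delineate_aquifer(profile))  # -> (2, 6): resistive interval from 2 m to 6 m
```

Repeating such picks along flight lines (one sounding about every 3 meters, per the abstract) is what yields the nearly continuous framework surfaces that isolated boreholes cannot provide.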

  5. 77 FR 27714 - Foreign-Trade Zone 220-Sioux Falls, SD; Application for Reorganization and Expansion Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-11

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-35-2012] Foreign-Trade Zone 220--Sioux Falls... been submitted to the Foreign-Trade Zones (FTZ) Board (the Board) by the Sioux Falls Development... Field, 2801 Jaycee Lane, Sioux Falls; Site 2 (123 acres)--Sioux Falls Development Foundation Park III...

  6. A digital-computer model of the Big Sioux aquifer in Minnehaha County, South Dakota

    USGS Publications Warehouse

    Koch, N.C.

    1982-01-01

    A finite-difference digital model was used to simulate steady-state conditions of the Big Sioux aquifer in Minnehaha County. Average water levels and average base flow discharge (4.9 cu ft/s) of the Big Sioux River were based on data from 1970 through 1979. The computer model was calibrated for transient conditions by simulating monthly historic conditions for 1976. During 1976, pumpage was offset mostly by surface-water recharge to the aquifer from January through June and ground-water discharge from storage from July through December. Measured drawdowns during 1976 generally were less than 2 feet except in the Sioux Falls city well field where drawdowns were as much as 15 feet. The model was used to study the effects of increased withdrawals under three hypothetical hydrologic situations. One hypothetical situation consisted of using 1976 pumping rates, recharge, and evapotranspiration but with the Big Sioux River dry. The pumping rate after 16 months was decreased by 40 percent from the actual pumping rate for that month in order to complete the monthly simulation without the storage being depleted at a nodal area. The second hypothetical situation consisted of a pumpage rate of 44.4 cubic feet per second from 60 wells spaced throughout the aquifer under historic 1976 hydrologic conditions. The results were that the aquifer could supply the additional withdrawal. The third hypothetical situation used the same hydrologic conditions as the second except that recharge was zero and the Big Sioux River was dry downstream from row 54. After 18 monthly simulations, the pumping rate was decreased by 44 percent to prevent pumping wells from depleting the aquifer, and, at that rate, 63 percent of the water being pumped was being replaced by water from the river. (USGS)
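The steady-state and transient runs described here rest on a standard finite-difference discretization of the groundwater-flow equation. A minimal 1-D analogue is sketched below; it is not the 1982 USGS model, and the grid, transmissivity, recharge, and boundary heads are all invented for illustration.

```python
# 1-D steady-state finite-difference sketch: T * d2h/dx2 + W = 0, with
# fixed heads at both ends standing in for river stages.
# All parameter values are hypothetical.

def steady_state_heads(n=9, dx=100.0, T=500.0, W=0.001,
                       h_left=95.0, h_right=90.0, iters=20000):
    """Jacobi iteration for interior heads; boundary heads held fixed.
    n nodes, spacing dx (ft), transmissivity T (ft^2/d), recharge W (ft/d)."""
    h = [h_left] + [0.0] * (n - 2) + [h_right]
    for _ in range(iters):
        new = h[:]
        for i in range(1, n - 1):
            # centered difference: h[i] = (h[i-1] + h[i+1]) / 2 + W*dx^2 / (2T)
            new[i] = 0.5 * (h[i - 1] + h[i + 1]) + W * dx * dx / (2.0 * T)
        h = new
    return h

heads = steady_state_heads()
print([round(v, 2) for v in heads])  # midpoint converges to ~92.66
```

Drying the river in such a model amounts to replacing a fixed-head river boundary with a no-flow condition, which is consistent with the pumping reductions the hypothetical dry-river runs required.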

  7. 77 FR 59890 - Reorganization and Expansion of Foreign-Trade Zone 220 Under Alternative Site Framework; Sioux...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-01

    ... Foreign-Trade Zone 220 Under Alternative Site Framework; Sioux Falls, SD Pursuant to its authority under... Sioux Falls Development Foundation, grantee of Foreign-Trade Zone 220, submitted an application to the... Dakota, within and adjacent to the Sioux Falls U.S. Customs and Border Protection port of entry, FTZ 220...

  8. Occurrence of organic wastewater compounds in drinking water, wastewater effluent, and the Big Sioux River in or near Sioux Falls, South Dakota, 2001-2004

    USGS Publications Warehouse

    Sando, Steven K.; Furlong, Edward T.; Gray, James L.; Meyer, Michael T.

    2006-01-01

    The U.S. Geological Survey (USGS) in cooperation with the city of Sioux Falls conducted several rounds of sampling to determine the occurrence of organic wastewater compounds (OWCs) in the city of Sioux Falls drinking water and wastewater effluent, and the Big Sioux River in or near Sioux Falls during August 2001 through May 2004. Water samples were collected during both base-flow and storm-runoff conditions. Water samples were collected at 8 sites, which included 4 sites upstream from the wastewater treatment plant (WWTP) discharge, 2 sites downstream from the WWTP discharge, 1 finished drinking-water site, and 1 WWTP effluent (WWE) site. A total of 125 different OWCs were analyzed for in this study using five different analytical methods. Analyses for OWCs were performed at USGS laboratories that are developing and/or refining small-concentration (less than 1 microgram per liter (ug/L)) analytical methods. The OWCs were classified into six compound classes: human pharmaceutical compounds (HPCs); human and veterinary antibiotic compounds (HVACs); major agricultural herbicides (MAHs); household, industrial, and minor agricultural compounds (HIACs); polyaromatic hydrocarbons (PAHs); and sterol compounds (SCs). Some of the compounds in the HPC, MAH, HIAC, and PAH classes are suspected of being endocrine-disrupting compounds (EDCs). Of the 125 different OWCs analyzed for in this study, 81 OWCs had one or more detections in environmental samples reported by the laboratories, and of those 81 OWCs, 63 had acceptable analytical method performance, were detected at concentrations greater than the study reporting levels, and were included in analyses and discussion related to occurrence of OWCs in drinking water, wastewater effluent, and the Big Sioux River. OWCs in all compound classes were detected in water samples from sampling sites in the Sioux Falls area. For the five sampling periods when samples were collected from the Sioux Falls finished drinking water, only one

  9. 33 CFR 165.T11-0511 - Safety Zone; Big Sioux River from the Military Road Bridge North Sioux City to the confluence of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the Military Road Bridge North Sioux City to the confluence of the Missouri River, SD. 165.T11-0511... River from the Military Road Bridge North Sioux City to the confluence of the Missouri River, SD. (a... Bridge, North Sioux City, SD at 42.52 degrees North, 096.48 West longitude to the confluence of the...

  10. A multifaceted approach to prioritize and design bank stabilization measures along the Big Sioux River, South Dakota, USA

    USDA-ARS?s Scientific Manuscript database

    A multifaceted approach was used to manage fine-grained sediment loadings from river bank erosion along the Big Sioux River between Dell Rapids and Sioux Falls, South Dakota, USA. Simulations with the RVR Meander and CONCEPTS river-morphodynamics computer models were conducted to identify stream-ban...

  11. 77 FR 54890 - Foreign-Trade Zone 220-Sioux Falls, SD; Authorization of Production Activity; Rosenbauer America...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-06

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-33-2012] Foreign-Trade Zone 220--Sioux Falls, SD; Authorization of Production Activity; Rosenbauer America, LLC/Rosenbauer South Dakota, LLC, (Emergency Vehicles/Firefighting Equipment), Lyons, SD On April 30, 2012, the Sioux Falls Development Foundation, grantee of FTZ 220, submitted a...

  12. Occurrence of anthropogenic organic compounds and nutrients in source and finished water in the Sioux Falls area, South Dakota, 2009-10

    USGS Publications Warehouse

    Hoogestraat, Galen K.

    2012-01-01

    Anthropogenic organic compounds (AOCs) in drinking-water sources commonly are derived from municipal, agricultural, and industrial wastewater sources, and are a concern for water-supply managers. A cooperative study between the city of Sioux Falls, S. Dak., and the U.S. Geological Survey was initiated in 2009 to (1) characterize the occurrence of anthropogenic organic compounds in the source waters (groundwater and surface water) to water supplies in the Sioux Falls area, (2) determine if the compounds detected in the source waters also are present in the finished water, and (3) identify probable sources of nitrate in the Big Sioux River Basin and determine if sources change seasonally or under different hydrologic conditions. This report presents analytical results of water-quality samples collected from source waters and finished waters in the Sioux Falls area. The study approach included the collection of water samples from source and finished waters in the Sioux Falls area for the analyses of AOCs, nutrients, and nitrogen and oxygen isotopes in nitrate. Water-quality constituents monitored in this study were chosen to represent a variety of the contaminants known or suspected to occur within the Big Sioux River Basin, including pesticides, pharmaceuticals, sterols, household and industrial products, polycyclic aromatic hydrocarbons, antibiotics, and hormones. A total of 184 AOCs were monitored, of which 40 AOCs had relevant human-health benchmarks. During 11 sampling visits, 45 AOCs (24 percent) were detected in at least one sample of source or finished water, and 13 AOCs were detected in at least 20 percent of all samples. Concentrations of detected AOCs were all less than 1 microgram per liter, except for two AOCs in multiple samples from the Big Sioux River, and one AOC in finished-water samples. Concentrations of AOCs were less than 0.1 microgram per liter in more than 75 percent of the detections. Nutrient concentrations varied seasonally in source
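The headline numbers in this abstract are straightforward detection-frequency arithmetic, sketched below. Only the 45-of-184 total comes from the abstract; the per-compound counts in the dictionary are invented stand-ins.

```python
# Detection-frequency arithmetic for the Sioux Falls source-water study.
# The 45/184 totals are from the abstract; per-compound counts are hypothetical.
monitored = 184
detected = 45
print(f"{detected / monitored:.0%}")  # -> 24%, matching the reported value

# Share of 11 sampling visits in which each (hypothetical) compound appeared:
detections = {"atrazine": 9, "caffeine": 4, "DEET": 1}
n_samples = 11
frequent = [c for c, k in detections.items() if k / n_samples >= 0.20]
print(frequent)  # -> ['atrazine', 'caffeine']: detected in at least 20% of samples
```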

  13. 78 FR 4431 - Santee Sioux Nation-Title XXI-Alcohol, Chapter 1.-Santee Sioux Nation Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-22

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Santee Sioux Nation--Title XXI--Alcohol.... ACTION: Notice. SUMMARY: This notice publishes the Title XXI--Alcohol, Chapter 1.-- Santee Sioux Nation... Assistant Secretary--Indian Affairs. I certify that the Santee Sioux Tribal Council duly adopted Title XXI...

  14. Sioux City Foundry Company, South Sioux City, Nebraska - Clean Water Act Public Notice

    EPA Pesticide Factsheets

    The EPA is providing notice of a proposed Administrative Penalty Assessment against the Sioux City Foundry Company, an industry located at 2400 G Street, South Sioux City, NE, for alleged violations of the Clean Water Act, 33 U.S.C. § 1319(g) for discharge

  15. The U.S. Army’s Sioux Campaign of 1876: Identifying the Horse as the Center of Gravity of the Sioux

    DTIC Science & Technology

    2003-06-06

    THE U.S. ARMY’S SIOUX CAMPAIGN OF 1876: IDENTIFYING THE HORSE AS THE CENTER OF GRAVITY OF THE SIOUX A thesis presented to the Faculty of the U.S...DATES COVERED (FROM - TO) 05-08-2002 to 06-06-2003 4. TITLE AND SUBTITLE THE ARMY’S SIOUX CAMPAIGN OF 1876: IDENTIFYING THE HORSE AS THE CENTER OF GRAVITY...of the Sioux they would have realized that the “hub of all power” or center of gravity of the Sioux was the horse, which every major aspect of Sioux

  16. Rosebud Sioux Wind Energy Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tony Rogers

    2008-04-30

    In 1998, through the vision of the late Alex “Little Soldier” Lunderman (1928-2000) and the efforts of the Rosebud Sioux Tribal Utilities Commission, with assistance from the Intertribal Council on Utility Policy (COUP) and Distributed Generation, Inc. (DISGEN), the Rosebud Sioux Tribe applied for and in 1999 was awarded a DOE Cooperative Grant to build a commercial 750-kilowatt wind turbine. With a 50/50 funding grant from the Department of Energy and a low-interest loan from the Rural Utilities Service, United States Department of Agriculture, the Rosebud Sioux Tribe commissioned a single 750-kilowatt NEG Micon wind turbine in March of 2003 near the Rosebud Casino. The Rosebud Sioux Wind Energy Project (Little Soldier, “Akicita Cikala”) turbine stands as a testament to the vision of a man and the Sicangu Oyate.

  17. South Dakota Air National Guard Joe Foss Field, Sioux Falls, SD. Remedial Investigation

    DTIC Science & Technology

    1990-09-01

    obtaining a National Pollutant Discharge Elimination System (NPDES) permit relative to the remedial actions for groundwater treatment at Site 1... Water samples are collected from these locations on a monthly basis and analyzed for conventional, inorganic, and bacteriological pollutants (fecal...cadmium, arsenic, and silver in the Big Sioux River at North Cliff Avenue just below the water treatment plant, approximately 1 mile east of the Base

  18. Lakota Sioux Indian Dance Theatre. Cuesheet for Students.

    ERIC Educational Resources Information Center

    Carr, John C.; And Others

    This performance guide provides students with an introduction to Lakota Sioux history and culture and to the dances performed by the Lakota Sioux Indian Dance Theatre. The Lakota Sioux believe that life is a sacred circle in which all things are connected, and that the circle was broken for them in 1890 by the massacre at Wounded Knee. Only in…

  19. 77 FR 27417 - Foreign-Trade Zone 220-Sioux Falls, SD; Notification of Proposed Production Activity, Rosenbauer...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-10

    ... is used for the production of emergency vehicles and firefighting equipment (pumps, tankers, rescue... drives, DC motors, static converters, rechargeable flashlights, flashlight parts, electrical foam..., LLC, (Emergency Vehicles/Firefighting Equipment), Lyons, SD The Sioux Falls Development Foundation...

  20. 76 FR 38013 - Safety Zone; Big Sioux River From the Military Road Bridge North Sioux City to the Confluence of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ... rising flood water. Operation in this zone is restricted unless specifically authorized by the Captain of... rising flood water. The impacts on routine navigation are expected to be minimal. Small Entities Under... vessels from destruction, loss or injury due to the hazards associated with rising flood water. If you are...

  1. The Sioux Nation.

    ERIC Educational Resources Information Center

    Archambault, JoAllyn

    Designed as a major supplementary source for social science teachers in elementary and secondary schools, this booklet presents cultural aspects of the Sioux Nation and the history of their dealings with white settlers and the U.S. government. To demonstrate the cultural diversity within one tribal entity, sketches are included of the culture…

  2. 40 CFR 81.85 - Metropolitan Sioux Falls Interstate Air Quality Control Region.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 18 2014-07-01 2014-07-01 false Metropolitan Sioux Falls Interstate... Designation of Air Quality Control Regions § 81.85 Metropolitan Sioux Falls Interstate Air Quality Control Region. The Metropolitan Sioux Falls Interstate Air Quality Control Region (Iowa-South Dakota) has been...

  3. 40 CFR 81.85 - Metropolitan Sioux Falls Interstate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Metropolitan Sioux Falls Interstate... Designation of Air Quality Control Regions § 81.85 Metropolitan Sioux Falls Interstate Air Quality Control Region. The Metropolitan Sioux Falls Interstate Air Quality Control Region (Iowa-South Dakota) has been...

  4. 40 CFR 81.85 - Metropolitan Sioux Falls Interstate Air Quality Control Region.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 18 2012-07-01 2012-07-01 false Metropolitan Sioux Falls Interstate... Designation of Air Quality Control Regions § 81.85 Metropolitan Sioux Falls Interstate Air Quality Control Region. The Metropolitan Sioux Falls Interstate Air Quality Control Region (Iowa-South Dakota) has been...

  5. 40 CFR 81.85 - Metropolitan Sioux Falls Interstate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Metropolitan Sioux Falls Interstate... Designation of Air Quality Control Regions § 81.85 Metropolitan Sioux Falls Interstate Air Quality Control Region. The Metropolitan Sioux Falls Interstate Air Quality Control Region (Iowa-South Dakota) has been...

  6. 40 CFR 81.85 - Metropolitan Sioux Falls Interstate Air Quality Control Region.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 18 2013-07-01 2013-07-01 false Metropolitan Sioux Falls Interstate... Designation of Air Quality Control Regions § 81.85 Metropolitan Sioux Falls Interstate Air Quality Control Region. The Metropolitan Sioux Falls Interstate Air Quality Control Region (Iowa-South Dakota) has been...

  7. 40 CFR 81.86 - Metropolitan Sioux City Interstate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Metropolitan Sioux City Interstate Air... Air Quality Control Regions § 81.86 Metropolitan Sioux City Interstate Air Quality Control Region. The Metropolitan Sioux City Interstate Air Quality Control Region (Iowa-Nebraska-South Dakota) consists of the...

  8. 40 CFR 81.86 - Metropolitan Sioux City Interstate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Metropolitan Sioux City Interstate Air... Air Quality Control Regions § 81.86 Metropolitan Sioux City Interstate Air Quality Control Region. The Metropolitan Sioux City Interstate Air Quality Control Region (Iowa-Nebraska-South Dakota) consists of the...

  9. 78 FR 41942 - Standing Rock Sioux Tribe; Major Disaster and Related Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    .... FEMA-4123-DR; Docket ID FEMA-2013-0001] Standing Rock Sioux Tribe; Major Disaster and Related... Presidential declaration of a major disaster for the Standing Rock Sioux Tribe (FEMA-4123-DR), dated June 25...''), as follows: I have determined that the damage to the lands associated with the Standing Rock Sioux...

  10. 40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...

  11. 40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...

  12. 40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...

  13. 76 FR 29647 - Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ...-AA00 Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC AGENCY: Coast Guard... for the ``Big Rock Blue Marlin Air Show,'' an aerial demonstration to be held over the waters of Bogue... notice of proposed rulemaking (NPRM) entitled Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound...

  14. 77 FR 13073 - Designation for the Jamestown, ND; Lincoln, NE; Memphis, TN; and Sioux City, IA Areas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-05

    ... the Jamestown, ND; Lincoln, NE; Memphis, TN; and Sioux City, IA Areas AGENCY: Grain Inspection..., IA areas, Lincoln, Midsouth, and Sioux City, respectively were the sole applicants for designation to.../2015 Midsouth Memphis, TN (901) 942-3216 4/1/2012 3/31/2015 Sioux City Sioux City, IA......... (712...

  15. 76 FR 18672 - Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-05

    ...-AA00 Safety Zone; Big Rock Blue Marlin Air Show; Bogue Sound, Morehead City, NC AGENCY: Coast Guard... Safety Zone for the ``Big Rock Blue Marlin Air Show'', an aerial demonstration to be held over the waters... Register. Basis and Purpose On June 11, 2011 from 7 p.m. to 8 p.m., the Big Rock Blue Marlin Tournament...

  16. Reconnaissance-level assessment of water quality near Flandreau, South Dakota

    USGS Publications Warehouse

    Schaap, Bryan D.

    2002-01-01

    This report presents water-quality data that have been compiled and collected for a reconnaissance-level assessment of water quality near Flandreau, South Dakota. The investigation was initiated as a cooperative effort between the U.S. Geological Survey and the Flandreau Santee Sioux Tribe. Members of the Flandreau Santee Sioux Tribe have expressed concern that Tribal members residing in the city of Flandreau experience more health problems than the general population in the surrounding area. Prior to December 2000, water for the city of Flandreau was supplied by wells completed in the Big Sioux aquifer within the city of Flandreau. After December 2000, water for the city of Flandreau was supplied by the Big Sioux Community Water System from wells completed in the Big Sioux aquifer along the Big Sioux River near Egan, about 8 river miles downstream of Flandreau. There is some concern that the public and private water supplies provided by wells completed in the Big Sioux aquifer near the Big Sioux River may contain chemicals that contribute to the health problems. Data compiled from other investigations provide information about the water quality of the Big Sioux River and the Big Sioux aquifer in the Flandreau area from 1978 through 2001. The median, minimum, and maximum values are presented for fecal bacteria, nitrate, arsenic, and atrazine. Nitrate concentrations of water from Flandreau public-supply wells occasionally exceeded the Maximum Contaminant Level of 10 milligrams per liter for public drinking water. For this study, untreated-water samples were collected from the Big Sioux River in Flandreau and from five wells completed in the Big Sioux aquifer in and near Flandreau. Treated-water samples from the Big Sioux Community Water System were collected at a site about midway between the treatment facility near Egan and the city of Flandreau. The first round of sampling occurred during July 9-12, 2001, and the second round of sampling occurred during August 20

  17. 78 FR 48543 - Notice of Approval of Finding of No Significant Impact-Record of Decision (FONSI/ROD) for Sioux...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-08

    ... Significant Impact--Record of Decision (FONSI/ROD) for Sioux Falls Regional Airport, Sioux Falls, South Dakota... approval of Finding of No Significant Impact--Record of Decision (FONSI/ROD) for proposed development at the Sioux Falls Regional Airport, Sioux Falls, South Dakota. The FAA approved the FONSI/ROD on July 22...

  18. 75 FR 17329 - Safety Zone; Big Bay Fourth of July Fireworks, San Diego Bay, San Diego, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-06

    ...-AA00 Safety Zone; Big Bay Fourth of July Fireworks, San Diego Bay, San Diego, CA AGENCY: Coast Guard... safety zone on the navigable waters of the San Diego Bay in support of the Big Bay July Fourth Show to Benefit the San Diego Armed Services YMCA. This temporary safety zone is necessary to provide for the...

  19. The Market Response to the Sioux City DC-10 Crash.

    PubMed

    Barnett, Arnold; Menighetti, John; Prete, Matthew

    1992-03-01

    The 1989 DC-10 crash at Sioux City, Iowa presented a rare instance in which a potential threat to safety was both (i) intensely publicized over a short period and (ii) also amenable to the unobtrusive measurement of the market reaction it evoked. As such, it allowed a useful case study of the extent and duration of behavior change caused by a frightening event. Using reservations data from travel agencies in five states, this paper estimates the short-term effects of the Sioux City crash on passenger willingness to fly the DC-10. The data suggest that, in the first few weeks after the crash, more than one third of travelers who would normally have booked DC-10 flights chose instead to fly other aircraft. Within 2 months of the disaster, however, DC-10 bookings rebounded to within 10% of the level that would have been expected had the Sioux City crash not occurred. At no time, apparently, did the airlines that operate DC-10s use their "yield-management" computer pricing systems unofficially to lower DC-10 fares relative to those on other types of plane.

  20. Narrow hybrid zone between two subspecies of big sagebrush (Artemisia tridentata: Asteraceae): XI. Plant-insect interactions in reciprocal transplant gardens

    Treesearch

    John H. Graham; E. Durant McArthur; D. Carl Freeman

    2001-01-01

    Basin big sagebrush (Artemisia tridentata ssp. tridentata) and mountain big sagebrush (A. t. ssp. vaseyana) hybridize in a narrow zone near Salt Creek, Utah. Reciprocal transplant experiments in this hybrid zone demonstrate that hybrids are more fit than either parental subspecies, but only in the hybrid zone. Do hybrids experience greater, or lesser, use by...

  1. Viral metagenomics of aphids present in bean and maize plots on mixed-use farms in Kenya reveals the presence of three dicistroviruses including a novel Big Sioux River virus-like dicistrovirus.

    PubMed

    Wamonje, Francis O; Michuki, George N; Braidwood, Luke A; Njuguna, Joyce N; Musembi Mutuku, J; Djikeng, Appolinaire; Harvey, Jagger J W; Carr, John P

    2017-10-02

    Aphids are major vectors of plant viruses. Common bean (Phaseolus vulgaris L.) and maize (Zea mays L.) are important crops that are vulnerable to aphid herbivory and aphid-transmitted viruses. In East and Central Africa, common bean is frequently intercropped by smallholder farmers to provide fixed nitrogen for cultivation of starch crops such as maize. We used a PCR-based technique to identify aphids prevalent in smallholder bean farms and next generation sequencing shotgun metagenomics to examine the diversity of viruses present in aphids and in maize leaf samples. Samples were collected from farms in Kenya in a range of agro-ecological zones. Cytochrome oxidase 1 (CO1) gene sequencing showed that Aphis fabae was the sole aphid species present in bean plots in the farms visited. Sequencing of total RNA from aphids using the Illumina platform detected three dicistroviruses. Maize leaf RNA was also analysed. Identification of Aphid lethal paralysis virus (ALPV), Rhopalosiphum padi virus (RhPV), and a novel Big Sioux River virus (BSRV)-like dicistrovirus in aphid and maize samples was confirmed using reverse transcription-polymerase chain reactions and sequencing of amplified DNA products. Phylogenetic, nucleotide and protein sequence analyses of eight ALPV genomes revealed evidence of intra-species recombination, with the data suggesting there may be two ALPV lineages. Analysis of BSRV-like virus genomic RNA sequences revealed features that are consistent with other dicistroviruses and that it is phylogenetically closely related to dicistroviruses of the genus Cripavirus. The discovery of ALPV and RhPV in aphids and maize further demonstrates the broad occurrence of these dicistroviruses. Dicistroviruses are remarkable in that they use plants as reservoirs that facilitate infection of their insect replicative hosts, such as aphids. This is the first report of these viruses being isolated from either organism. The BSRV-like sequences represent a potentially novel

  2. Virginia Driving Hawk Sneve, Sioux Author. With Teacher's Guide. Native Americans of the Twentieth Century.

    ERIC Educational Resources Information Center

    Minneapolis Public Schools, MN.

    A biography for elementary school students describes the life and career of Virginia Driving Hawk Sneve (Sioux), a Native American free-lance writer, and includes her photograph and a map of South Dakota reservations. A story by Mrs. Sneve tells about a half-Sioux boy who confronts his heritage when his grandfather makes a long journey between his…

  3. Intersection of economics, history, and human biology: secular trends in stature in nineteenth-century Sioux Indians.

    PubMed

    Prince, J M

    1995-06-01

    An unusual confluence of historical factors may be responsible for nineteenth-century Sioux being able to sustain high statures despite enduring adverse conditions during the early reservation experience. An exceptionally long span of Dakota Sioux history was examined for secular trends using a cross-sectional design. Two primary sources were used: one anthropometric data set was collected in the late nineteenth century under the direction of Franz Boas, and another set was collected by James R. Walker in the early twentieth century. Collectively, the data represent the birth years between 1820 and 1880 for adult individuals 20 years old or older. Adult heights (n = 1197) were adjusted for aging effects and regressed on age, with each data set and each sex analyzed separately. Tests for differences between the adult means of age cohorts by decade of birth (1820-1880) were also carried out. Only one sample of adults showed any convincing secular trend (p < 0.05): surprisingly, a positive linear trend for Walker's sample of adult males. This sample was also the one sample of adults that showed significant differences between age cohorts. The failure to find any negative secular trend in this population of Amerindians is remarkable, given the drastic socioeconomic changes that occurred with the coming of the reservation period (ca. 1868). Comparisons with contemporary white Americans show that the Sioux remained consistently taller than whites well into the reservation period and that Sioux children (Prince 1989) continued to grow at highly favorable rates during this time of severe conditions. A possible explanation for these findings involves the relatively favorable level of subsistence support received by most of the Sioux from the US government, as stipulated by various treaties. Conservative estimates suggest that the Sioux may have been able to sustain net levels of per capita annual meat consumption that exceeded the US average for several years before 1893.
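
    The secular-trend check described in this abstract (a least-squares slope of age-adjusted mean stature on birth decade, where a slope near zero indicates no trend) can be sketched as follows. The stature values below are invented for illustration; they are not Boas's or Walker's measurements.

```python
# Hypothetical sketch of a secular-trend check: ordinary least-squares
# slope of age-adjusted mean stature on decade of birth. A slope near
# zero suggests no secular trend. All data values are invented.

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    return sxy / sxx

# Decade of birth -> hypothetical mean age-adjusted stature (cm)
birth_decades = [1820, 1830, 1840, 1850, 1860, 1870, 1880]
mean_stature_cm = [172.1, 172.4, 171.9, 172.3, 172.0, 172.2, 172.1]

trend = ols_slope(birth_decades, mean_stature_cm)
print(f"secular trend: {trend:.4f} cm per year")
```

    In the study itself the regressions were accompanied by significance tests on cohort means; the sketch shows only the slope estimate.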

  4. View of EPA Farm Sioux silo, facing east. Radsafe trailer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of EPA Farm Sioux silo, facing east. Rad-safe trailer is to the left - Nevada Test Site, Environmental Protection Agency Farm, Silo Type, Area 15, Yucca Flat, 10-2 Road near Circle Road, Mercury, Nye County, NV

  5. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present the prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach, for which the key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single

  6. Learning Partnerships: Lessons from the Lakota Sioux of Rosebud.

    ERIC Educational Resources Information Center

    Leigh, Katharine E.; Tee, Effie

    1997-01-01

    Collaboration between Native and non-Native American faculty and students from the University of Oklahoma's Department of Architecture, members of the American Indian Council of Architects and Engineers (AICAE), and members of the Rosebud Sioux (South Dakota) generated reservation housing proposals that would meet family and environmental needs…

  7. 78 FR 39820 - Standing Rock Sioux Tribe Disaster #SD-00058

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ... SMALL BUSINESS ADMINISTRATION [Disaster Declaration 13639 and 13640] Standing Rock Sioux Tribe... the Presidential declaration of a major disaster for Public Assistance Only for the Standing Rock...: Standing Rock Indian Reservation. The Interest Rates are: Percent For Physical Damage: Non-Profit...

  8. 75 FR 73981 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    .... 0910131362-0087-02] RIN 0648-XA066 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2010 total allowable catch (TAC) of big skate in the Central Regulatory Area of the...

  9. 78 FR 27863 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    .... 120918468-3111-02] RIN 0648-XC673 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2013 total allowable catch of big skate in the Central Regulatory Area of the GOA has...

  10. 77 FR 75399 - Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the Central Regulatory Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-20

    .... 111207737-2141-02] RIN 0648-XC405 Fisheries of the Exclusive Economic Zone Off Alaska; Big Skate in the... prohibiting retention of big skate in the Central Regulatory Area of the Gulf of Alaska (GOA). This action is necessary because the 2012 total allowable catch of big skate in the Central Regulatory Area of the GOA has...

  11. 1. Overview of EPA Farm Lab Building 1506, Sioux silo ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Overview of EPA Farm Lab Building 15-06, Sioux silo and slaughter addition (featuring poles from hay shed), facing east-northeast. - Nevada Test Site, Environmental Protection Agency Farm, Area 15, Yucca Flat, 10-2 Road near Circle Road, Mercury, Nye County, NV

  12. Dr. Charles Alexander Eastman, Sioux Physician-Author, 1858-1939. With Teacher's Guide. Native Americans of the Twentieth Century.

    ERIC Educational Resources Information Center

    Minneapolis Public Schools, MN.

    A biography for elementary school students of a 19th century American Indian physician and author, Charles Alexander Eastman (Sioux), includes photographs of Dr. Eastman and his wife. A teacher's guide following the bibliography contains information on the Sioux Uprising of 1862 and the Wounded Knee Massacre, learning objectives and directions for…

  13. Optimization Evaluation, General Motors Former AC Rochester Facility, Sioux City, Iowa

    EPA Pesticide Factsheets

    The General Motors (GM) Former AC Rochester Facility (site) is located within the valley of the Missouri River in Sioux City, Iowa and is bounded by a steep loess bluff to the north, commercial properties to the east, and undeveloped properties to the...

  14. 78 FR 39610 - Safety Zone; Big Bay Boom, San Diego Bay; San Diego, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ..., Protection of Children from Environmental Health Risks and Safety Risks. This rule is not an economically significant rule and does not create an environmental risk to health or risk to safety that may...-AA00 Safety Zone; Big Bay Boom, San Diego Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION...

  15. Development of the renewal on the Standing Rock Sioux Reservation Project

    USDA-ARS?s Scientific Manuscript database

    The Standing Rock Sioux Reservation is comprised of 2.3 million acres of primarily rangeland that straddle the North Dakota – South Dakota border. Many of its inhabitants face issues with unemployment and dietary problems. In addition, there are concerns about the management of its natural resourc...

  16. Child Socialization among Native Americans: The Lakota (Sioux) in Cultural Context.

    ERIC Educational Resources Information Center

    Medicine, Beatrice

    1985-01-01

    Child socialization research among American Indians must account for tribal differences, examining gender roles in a given tribal culture, and employing studies of enculturation and acculturation, life histories, and ethnographies. Child socialization in the Teton Sioux or Lakota tribe can be used to illustrate these research techniques. The…

  17. The Omaha Dance in Oglala and Sicangu Sioux History, 1883-1923.

    ERIC Educational Resources Information Center

    Thiel, Mark G.

    Although altered by government and religious sanctions since the establishment of reservations in 1878, the Omaha dance still serves as an obtrusive demonstration of tribal identity and cohesion for the Oglala and Sicangu Sioux. The dance achieved a high level of prominence as a successful celebration for petitioning supernatural protection in…

  18. Alienation and Achievement Among Oglala Sioux Secondary School Students. Final Report.

    ERIC Educational Resources Information Center

    Spilka, Bernard

    As a final report on alienation and achievement among 753 Oglala Sioux secondary school students on the Pine Ridge Reservation, this document attempts to portray the circumstances affecting the Indian child in school. To provide a basis for comparison, the sample also contained 855 white secondary school pupils. General findings which are believed…

  19. MERCURY RISK MANAGEMENT IN LIVESTOCK PONDS ON THE CHEYENNE RIVER SIOUX RESERVATION

    EPA Science Inventory

    In a prior collaborative 3 year study with the Cheyenne River Sioux Tribe Department of Environmental Protection (CRST DEP), and the Agencies' Environmental Response Team, RegionVIII investigated Hg levels in fish tissues from the Cheyenne River and Lake Oahe in South Dakota. In...

  20. The "Fighting Sioux" Conflict: Lessons on Social Justice for Higher Education

    ERIC Educational Resources Information Center

    Phillips, Amy; Rice, Dan

    2010-01-01

    Conflict over the University of North Dakota's (UND) "Fighting Sioux" logo and nickname has been protracted and bitter, lasting over 40 years. This article presents four explanations for UND's status as one of the last universities to maintain a Native American nickname and logo: the dynamics of racism, the power of booster culture,…

  1. Revised Constitution and Bylaws of the Sisseton-Wahpeton Sioux Tribe, South Dakota.

    ERIC Educational Resources Information Center

    Sisseton-Wahpeton Sioux Tribe, Inc., SD.

    As stated in the Preamble, the Sisseton-Wahpeton Sioux Tribe has established this "Revised Constitution and Bylaws" in order to "form a better tribal government, exercise tribal rights and responsibilities and promote the welfare of the people". This "Revised Constitution" consists of 11 Articles which are identified…

  2. Local Heroes: Three Members of the Rosebud Sioux Tribe Lead by Example.

    ERIC Educational Resources Information Center

    Haase, Eric; Soldier, Lydia Whirlwind

    1993-01-01

    Provides the personal narratives of three Rosebud Sioux tribal leaders: Geraldine Ancoren, a mother of 7 who has worked for the tribe in various departments for 16 years; Ned Metcalf, who organized several development projects for his extended family; and Duane Hollow Horn Bear, a Lakota Studies instructor at Sinte Gleska University and member of…

  3. Patriarchy and the "Fighting Sioux": A Gendered Look at Racial College Sports Nicknames

    ERIC Educational Resources Information Center

    Williams, Dana M.

    2006-01-01

    The use of Native American nicknames and symbols by US college athletics is a long-standing practice that embodies various forms of authoritarian oppression. One type of authoritarianism is that of patriarchy and it has been present in the struggle over the nickname at the University of North Dakota, the "Fighting Sioux". This article…

  4. Indianness, Sex, and Grade Differences on Behavior and Personality Measures Among Oglala Sioux Adolescents

    ERIC Educational Resources Information Center

    Cress, Joseph N.; O'Donnell, James P.

    1974-01-01

    This study assesses Indianness (mixed or full-blood), sex, and grade differences among Oglala Sioux high school students, using the Coopersmith Behavior Rating Forms and the Quay-Peterson Behavior Problem Checklist. Results indicate that mixed-bloods had higher achievement and greater popularity than full-bloods. Fewer problems and higher…

  5. Wood duck brood movements and habitat use on prairie rivers in South Dakota

    USGS Publications Warehouse

    Granfors, D.A.; Flake, Lester D.

    1999-01-01

    Wood duck (Aix sponsa) populations have been increasing in the Central Flyway, but little is known about wood duck brood rearing in prairie ecosystems. We compared movements and habitat use of radiomarked female wood ducks with broods in South Dakota on 2 rivers with contrasting prairie landscapes. The perennial Big Sioux River had a broad floodplain and riparian forest, whereas the intermittent Maple River had emergent vegetation along the river channel. Movements between nest sites and brood-rearing areas were longer on the Maple River than on the Big Sioux River (P = 0.02) and were among the longest reported for wood duck broods. Movements on the Big Sioux River were longer in 1992 (P = 0.01), when the floodplain was dry, than in 1993 or 1994. Before flooding occurred on the Big Sioux River, broods used semipermanent wetlands and tributaries outside the floodplain; thereafter, females selected forested wetlands along the river. Broods on the Maple River used emergent vegetation along the river channel throughout the study. Because median length of travel to brood-rearing areas was 2-3 km we recommend maintenance of brood-rearing habitat every 3-5 km along prairie rivers. Wildlife managers should encourage landowners to retain riparian vegetation along perennial rivers and emergent vegetation along intermittent streams to provide brood-rearing habitat during wet and dry cycles.

  6. PROTECTING HUMAN HEALTH AND THE ENVIRONMENT ON SIOUX TRIBAL LANDS: A PARTNERSHIP OF EPA AND TRIBAL EPD

    EPA Science Inventory

    Through environmental sampling performed by EPA and Cheyenne River Sioux Tribe Environmental Protection Division personnel, mercury contamination in managed pond systems in South Dakota was characterized and risk reduction recommendations were made to protect subsistence fisherma...

  7. Black Elk Speaks. Being the Life Story of a Holy Man of the Oglala Sioux.

    ERIC Educational Resources Information Center

    Neihardt, John G.

    This classic book describes the life experiences and "great vision" of Black Elk, a holy man of the Oglala Sioux. Black Elk imparted these things to John Neihardt so that he might save them for future generations. Black Elk's power-vision occurred when he was 9 years old during a sickness. The lengthy vision contained profound symbolism…

  8. Health assessment for Vogel Paint and Wax, Maurice, Sioux County, Iowa, Region 7. CERCLIS No. IAD980630487. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-04-29

    The Vogel Paint and Wax National Priority List site is situated in northwest Iowa in Sioux County. Contaminants found at the site consist of heavy metals (particularly cadmium, chromium, lead, and mercury) and volatile organic compounds (benzene, ethylbenzene, methyl ethyl ketone, toluene, and xylene). Two towns, Maurice and Struble, and the Southern Sioux County Rural Water System well field are located within three miles of the site, and two families live within 1600 feet of the waste-disposal site. Environmental pathways include contaminated soil and ground water, as well as potential surface water and air contamination. Although there does not appear to be any immediate public health threat, the site is of potential health concern because of the possibility for further off-site migration of contaminants into the ground water aquifer and for direct on-site contact.

  9. Prevalence of anemia in First Nations children of northwestern Ontario.

    PubMed Central

    Whalen, E. A.; Caulfield, L. E.; Harris, S. B.

    1997-01-01

    OBJECTIVE: To estimate the prevalence of anemia among First Nations children of northwestern Ontario. DESIGN: Retrospective review of all hemoglobin determinations between 1990 and 1992 in the Sioux Lookout Zone. SETTING: The Sioux Lookout Zone Hospital, a secondary care referral hospital for 28 remote First Nations communities in northwestern Ontario, affiliated with the University of Toronto's Sioux Lookout Program. PARTICIPANTS: All First Nations children age 3 to 60 months who had produced venipuncture or fingerprick blood samples between 1990 and 1992 (614 children had a total of 1223 hemoglobin determinations). MAIN OUTCOME MEASURES: Prevalence of anemia by age, sex, geographical location, and diagnosis. Anemia was defined as a hemoglobin value less than 110g/L. RESULTS: Prevalence of anemia peaked in the age range of 6 to 24 months with prevalence rates of 51.7% to 79.3%. Conditions most commonly associated with anemia were respiratory tract infections. Children living in communities in the western part of the Sioux Lookout Zone were 1.64 times more likely to have anemia (95% confidence interval 1.15, 2.35) than children in the other communities. CONCLUSIONS: Anemia appears to be a serious public health problem among preschool children in the Sioux Lookout Zone. PMID:9111982
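
    The classification and relative-risk arithmetic in this abstract can be sketched directly from its stated cutoff (anemia defined as hemoglobin below 110 g/L). The hemoglobin values and group sizes below are invented for illustration, not the study's data.

```python
# Sketch of the anemia classification and relative-risk arithmetic.
# The cutoff (Hb < 110 g/L) comes from the abstract; the hemoglobin
# values below are hypothetical.

def is_anemic(hb_g_per_l: float) -> bool:
    """Anemia defined as hemoglobin below 110 g/L."""
    return hb_g_per_l < 110.0

# Hypothetical hemoglobin values (g/L) for two community groups
western = [98, 105, 112, 101, 95, 120, 108, 99]
other = [115, 109, 121, 118, 104, 125, 117, 112]

prev_western = sum(is_anemic(h) for h in western) / len(western)
prev_other = sum(is_anemic(h) for h in other) / len(other)
relative_risk = prev_western / prev_other

print(f"prevalence (western) = {prev_western:.2f}")   # 0.75
print(f"prevalence (other)   = {prev_other:.2f}")     # 0.25
print(f"relative risk        = {relative_risk:.2f}")  # 3.00
```

    The study's reported figure of 1.64 is a risk ratio of this form, with a confidence interval computed on the log scale.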

  10. MERCURY CONTAMINATION OF SUBSISTENCE FISHERIES ON TRIBAL LANDS: A PARTNERSHIP OF ORD, REGION 8 AND CHEYENNE RIVER SIOUX RESERVATION

    EPA Science Inventory

    In a prior collaborative 3 year study with the Cheyenne River Sioux Tribe Department of Environmental Protection (CRST DEP), and the Agencies' Environmental Response Team, RegionVIII investigated Hg levels in fish tissues from the Cheyenne River and Lake Oahe in South Dakota. In...

  11. "I Am Not a Fairy Tale": Contextualizing Sioux Spirituality and Story Traditions in Susan Power's "The Grass Dancer"

    ERIC Educational Resources Information Center

    Diana, Vanessa Holford

    2009-01-01

    Standing Rock Sioux writer Susan Power's best-selling novel "The Grass Dancer" (1994) includes depictions of the supernatural and spiritual that do not conform to the Judeo-Christian or, in some cases, the atheist or rationalist worldviews of many readers. Power writes of ghost characters and haunted places, communication between the living and…

  12. Major General George Crook’s Use of Counterinsurgency Compound Warfare during the Great Sioux War of 1876-77

    DTIC Science & Technology

    2008-06-13

    to secure the western flank along a parallel ridge to support the scouts and infantrymen who were charging against the Sioux. Two more battalions...that he was a native of the Sandwich Islands and was the son of a Mormon missionary and an island noblewoman. More likely, he was the son of

  13. Medical Teaching in Sioux Lookout: Primary Health Care in a Cross-Cultural Setting

    PubMed Central

    Hagen, Catherine; Casson, Ian; Wilson, Ruth

    1989-01-01

    When participating in health care in northern Native communities, physician-teachers are challenged to understand community development, treat diverse manifestations of illness and socio-cultural strain, and provide opportunities for students and residents to learn the skills, knowledge, and attitudes that will promote the health of Native people and that will develop the students' own education. The University of Toronto Sioux Lookout Program includes a teaching practice with the goals of service, teaching, and research that provides care and promotes health for 13 000 Ojibway- or Cree-speaking aboriginal Canadians in northwestern Ontario. Knowledge gained in this setting about broad determinants of health, communication skills, and clinical decision making can be generalized to other practices. PMID:21249082

  14. Gravity and Aeromagnetic Gradients within the Yukon-Tanana Upland, Black Mountain Tectonic Zone, Big Delta Quadrangle, east-central Alaska

    USGS Publications Warehouse

    Saltus, R.W.; Day, W.C.

    2006-01-01

    The Yukon-Tanana Upland is a complex composite assemblage of variably metamorphosed crystalline rocks with strong North American affinities. At the broadest scale, the Upland has a relatively neutral magnetic character. More detailed examination, however, reveals a fundamental northeast-southwest-trending magnetic gradient, representing a 20-nT step (as measured at a flight height of 300 m) with higher values to the northwest, that extends from the Denali fault to the Tintina fault and bisects the Upland. This newly recognized geophysical gradient is parallel to, but about 100 km east of, the Shaw Creek fault. The Shaw Creek fault is mapped as a major left-lateral, strike-slip fault, but does not coincide with a geophysical boundary. A gravity gradient coincides loosely with the southwestern half of the magnetic gradient. This gravity gradient is the eastern boundary of a 30-mGal residual gravity high that occupies much of the western and central portions of the Big Delta quadrangle. The adjacent lower gravity values to the east correlate, at least in part, with mapped post-metamorphic granitic rocks. Ground-based gravity and physical property measurements were made in the southeasternmost section of the Big Delta quadrangle in 2004 to investigate these geophysical features. Preliminary geophysical models suggest that the magnetic boundary is deeper and more fundamental than the gravity boundary. The two geophysical boundaries coincide in and around the Tibbs Creek region, an area of interest to mineral exploration. A newly mapped tectonic zone (the Black Mountain tectonic zone of O'Neill and others, 2005) correlates with the coincident geophysical boundaries.

  15. Big Sky and Greenhorn Drilling Area on Mount Sharp

    NASA Image and Video Library

    2015-12-17

    This view from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover covers an area in "Bridger Basin" that includes the locations where the rover drilled a target called "Big Sky" on the mission's Sol 1119 (Sept. 29, 2015) and a target called "Greenhorn" on Sol 1137 (Oct. 18, 2015). The scene combines portions of several observations taken from sols 1112 to 1126 (Sept. 22 to Oct. 6, 2015) while Curiosity was stationed at Big Sky drilling site. The Big Sky drill hole is visible in the lower part of the scene. The Greenhorn target, in a pale fracture zone near the center of the image, had not yet been drilled when the component images were taken. Researchers selected this pair of drilling sites to investigate the nature of silica enrichment in the fracture zones of the area. http://photojournal.jpl.nasa.gov/catalog/PIA20270

  16. The vegetation of the Grand River/Cedar River, Sioux, and Ashland Districts of the Custer National Forest: a habitat type classification.

    Treesearch

    Paul L. Hansen; George R. Hoffman

    1988-01-01

    A vegetation classification was developed, using the methods and concepts of Daubenmire, on the Ashland, Sioux, and Grand River/Cedar River Districts of the Custer National Forest. Of the 26 habitat types delimited and described, eight were steppe, nine shrub-steppe, four woodland, and five forest. Two community types also were described. A key to the habitat types and...

  17. Water resources of the Big Sioux River Valley near Sioux Falls, South Dakota

    USGS Publications Warehouse

    Jorgensen, Donald G.; Ackroyd, Earl A.

    1973-01-01

    Water from the river is generally less mineralized, softer, and easier to treat than ground water. Water pumped from wells near the river is similar in quality to the river water, but does not have the objectionable odors or tastes often present in water from the river.

  18. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.
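
    The SSVM named in this abstract works by replacing the non-smooth plus function max(0, x) in the SVM objective with a smooth approximation, so Newton-type solvers can be applied. A minimal illustrative sketch of that smoothing (not the paper's implementation; alpha is the smoothing parameter):

```python
# Sketch of the smoothing idea behind the smooth support vector
# machine (SSVM): approximate the plus function max(0, x) with a
# smooth function so gradient/Newton methods apply. Illustrative only.
import math

def smooth_plus(x: float, alpha: float = 5.0) -> float:
    """Smooth approximation of max(0, x):
    p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x))."""
    return x + math.log(1.0 + math.exp(-alpha * x)) / alpha

# As alpha grows, smooth_plus converges to max(0, x).
for x in (-2.0, 0.0, 2.0):
    print(x, max(0.0, x), smooth_plus(x, alpha=100.0))
```

    The approximation error is largest at x = 0 (where it equals log(2)/alpha) and vanishes as alpha increases.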

  19. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear. PMID:25143968

  20. 76 FR 15936 - Opportunity for Designation in the Aberdeen, SD; Decatur, IL; Hastings, NE; Fulton, IL; the State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-22

    ... regular business hours (7 CFR 1.27(c)). FOR FURTHER INFORMATION CONTACT: Karen W. Guagliardo, 202-720-8262... State line east; Bounded on the East by the eastern South Dakota State line (the Big Sioux River) to... contiguous [[Page 15938

  1. Use of distance measures to assess environmental and genetic variability across sagebrush hybrid zones

    Treesearch

    D. Carl Freeman; John H. Graham; Terra Jones; Han Wang; Kathleen J. Miglia; E. Durant McArthur

    2001-01-01

    Reciprocal transplant studies in the big sagebrush hybrid zone at Salt Creek Canyon, Utah, showed that hybrids between basin (Artemisia tridentata ssp. tridentata) and mountain big sagebrush (A. t. ssp. vaseyana) are more fit than either parental taxon, but only when raised in the hybrid zone. Hybrids are less fit than the native parent when raised in the parental...

  2. Big Sky and Greenhorn Elemental Comparison

    NASA Image and Video Library

    2015-12-17

    NASA's Curiosity Mars rover examined both the "Greenhorn" and "Big Sky" targets with the rover's Alpha Particle X-ray Spectrometer (APXS) instrument. Greenhorn is located within an altered fracture zone and has an elevated concentration of silica (about 60 percent by weight). Big Sky is the unaltered counterpart for comparison. The bar plot on the left shows scaled concentrations as analyzed by Curiosity's APXS. The bar plot on the right shows what the Big Sky composition would look like if silica (SiO2) and calcium-sulfate (both abundant in Greenhorn) were added. The similarity in the resulting composition suggests that much of the chemistry of Greenhorn could be explained by the addition of silica. Ongoing research aims to distinguish between that possible explanation for silicon enrichment and an alternative of silicon being left behind when some other elements were removed by acid weathering. http://photojournal.jpl.nasa.gov/catalog/PIA20275
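
    The "add silica and renormalize" comparison described in this caption is simple weight-percent mixing arithmetic: add a component to a composition, then rescale so the total is 100 wt%. The oxide values below are invented for illustration, not actual APXS measurements:

```python
# Sketch of weight-percent mixing arithmetic: add a component (e.g.,
# SiO2) to a composition and renormalize to 100 wt%. All oxide values
# below are hypothetical, not APXS data.

def add_and_renormalize(composition: dict, additions: dict) -> dict:
    """Add components (in weight units) and renormalize to 100 wt%."""
    mixed = dict(composition)
    for oxide, amount in additions.items():
        mixed[oxide] = mixed.get(oxide, 0.0) + amount
    total = sum(mixed.values())
    return {oxide: 100.0 * w / total for oxide, w in mixed.items()}

# Hypothetical unaltered ("Big Sky"-like) composition, in wt%
big_sky_like = {"SiO2": 45.0, "FeO": 20.0, "CaO": 7.0, "Other": 28.0}

# Adding silica dilutes every other component proportionally
mixed = add_and_renormalize(big_sky_like, {"SiO2": 55.0})
print({k: round(v, 1) for k, v in mixed.items()})
```

    The key point, as in the caption, is that adding one component proportionally dilutes all the others, so an altered composition can sometimes be reproduced from the unaltered one by addition alone.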

  3. A Critical Review of Ann Rinaldi's "My Heart Is on the Ground: The Diary of Nannie Little Rose, A Sioux Girl, Carlisle Indian School, Pennsylvania, 1880."

    ERIC Educational Resources Information Center

    Reese, Debby; Slapin, Beverly; Landis, Barb; Atleo, Marlene; Caldwell, Naomi; Mendoza, Jean; Miranda, Deborah; Rose, La Vera; Smith, Cynthia

    This paper critically reviews the book, "My Heart Is On the Ground: The Diary of Nannie Little Rose, a Sioux Girl, Carlisle Indian School, 1800." The review begins with a profile of Captain Richard Henry Pratt who founded the Carlisle (Pennsylvania) Indian Industrial School in 1879. Pratt's philosophy was to "kill the Indian and…

  4. The medicine wheel nutrition intervention: a diabetes education study with the Cheyenne River Sioux Tribe.

    PubMed

    Kattelmann, Kendra K; Conti, Kibbe; Ren, Cuirong

    2009-09-01

    The Northern Plains Indians of the Cheyenne River Sioux Tribe have experienced significant lifestyle and dietary changes over the past seven generations that have resulted in increased rates of diabetes and obesity. The objective of this study was to determine if Northern Plains Indians with type 2 diabetes mellitus who are randomized to receive culturally adapted educational lessons based on the Medicine Wheel Model for Nutrition in addition to their usual dietary education will have better control of their type 2 diabetes than a nonintervention, usual care group who received only the usual dietary education from their personal providers. A 6-month, randomized, controlled trial was conducted January 2005 through December 2005, with participants randomized to the education intervention or usual care control group. The education group received six nutrition lessons based on the Medicine Wheel Model for Nutrition. The usual care group received the usual dietary education from their personal providers. One hundred fourteen Northern Plains Indians from Cheyenne River Sioux Tribe aged 18 to 65 years, with type 2 diabetes. Weight, body mass index (BMI), hemoglobin A1c, fasting serum glucose and lipid parameters, circulating insulin, and blood pressure were measured at the beginning and completion. Diet histories, physical activity, and dietary satiety surveys were measured at baseline and monthly through completion. Differences were determined using Student t tests, chi(2) tests, and analysis of variance. The education group had a significant weight loss (1.4+/-0.4 kg, P

  5. Big Opportunities and Big Concerns of Big Data in Education

    ERIC Educational Resources Information Center

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  6. Sioux City Riverbank Filtration Study

    NASA Astrophysics Data System (ADS)

    Mach, R.; Condon, J.; Johnson, J.

    2003-04-01

    The City of Sioux City (City) obtains a large percentage of its drinking water supply from both a horizontal collector well system and vertical wells located adjacent to the Missouri River. These wells are set in either the Missouri Alluvium or the Dakota Sandstone aquifer. Several of the collector well laterals extend out beneath the Missouri River, with the laterals lying more than twenty feet below the river channel bottom. Because of concerns regarding ground water under direct surface-water influence, the Iowa Department of Natural Resources (IDNR) required the City to expand its water treatment process to address potential surface-water contaminant issues. Given the extensive cost of these plant upgrades, the City and Olsson Associates (OA) approached the IDNR requesting approval to assess the degree of natural riverbank filtration for water treatment. If this natural process could be ascertained, the level of treatment from the plant could be reduced. The objective of this study was to quantify the degree of surface-water (i.e., Missouri River) filtration provided by the underlying Missouri River sediments. Several series of microscopic particulate analyses were conducted, along with tracking of turbidity, temperature, and bacteria, and a full-scale particle-count study. Six particle sizes from six sampling points were assessed over a nine-month period that spanned summer, fall, and spring weather periods. The project was set up in two phases and used industry-accepted statistical analyses to identify particle data trends. The first phase consisted of twice-daily sample collection from the Missouri River and the collector well system for a one-month period. Statistical analysis of the data indicated that reducing the sampling frequency and sampling locations would yield justifiable data while significantly reducing sampling and analysis costs. The IDNR approved this modification, and phase II included sampling and analysis under this reduced plan for an eight

  7. Compilation of Data to Support Development of a Pesticide Management Plan by the Yankton Sioux Tribe, Charles Mix County, South Dakota

    USGS Publications Warehouse

    Schaap, Bryan D.

    2004-01-01

    The U.S. Environmental Protection Agency is working with the Yankton Sioux Tribe to develop a pesticide management plan to reduce the potential for contamination of ground water that may result from the use of registered pesticides. The purpose of this study was to compile technical information to support development of a pesticide management plan by the Yankton Sioux Tribe for the area within the Yankton Sioux Reservation, Charles Mix County, South Dakota. Five pesticides (alachlor, atrazine, cyanazine, metolachlor, and simazine) were selected by the U.S. Environmental Protection Agency for the management-plan approach because they had been identified as probable or possible human carcinogens and had often been associated with ground-water contamination in many areas, frequently at high concentrations. This report provides a compilation of data to support development of a pesticide management plan. Available data sets are summarized in the text of this report, and the actual data sets are provided on one compact disc read-only memory (CD-ROM) that is included with the report. The compact disc contains data sets pertinent to the development of a pesticide management plan. Pesticide use for the study area is described using information from state and national databases. Within South Dakota, pesticides commonly are applied to corn and soybean crops, which are the primary row crops grown in the study area. Water-quality analyses for pesticides are summarized for several surface-water sites. Pesticide concentrations in most samples were found to be below minimum reporting levels. Topographic data are presented in the form of 30-meter digital elevation model grids and delineation of drainage basins. Geohydrologic data are provided for the surficial deposits and the bedrock units. A high-resolution (30-by-30 meter) land-cover and land-use database is provided and summarized in tabular format. More than 91 percent of the study area is used for row crops, pasture, or hay, and almost 6

  8. The Big Mountain oil field, Ventura, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, E.A.

    1967-06-02

    The Big Mt. oil field is believed to be primarily a fault-trap accumulation. All faults are hidden beneath the unconformity at the base of the Vaqueros Formation, or die out before reaching the surface. The Sespe Formation is divided into an upper sandy unit, a middle alternating sand and shale unit, and a lower sandy unit in the Big Mt. area. The producing zone is in the upper portion of the lower sandy unit. The sands are soft and friable, medium to coarse, and the reservoir characteristics are relatively good. The geology is similar to that observed in many Sespe fields, such as the Oxnard and Montalvo oil fields, and the southeastern portion of the South Mt. oil field. These fields are broken into many fault blocks, with different production characteristics in each block. Good wells and poor ones are interspersed, with good wells downdip from poor ones and vice versa. The Big Mt. field is shallow, drilling and production costs are relatively inexpensive, and there is no royalty burden on the oil.

  9. Responses of experimental river corridors to engineered log jams

    USDA-ARS?s Scientific Manuscript database

    Physical models of the Big Sioux River, SD, were constructed to assess the impact on flow, drag, and bed erosion and deposition in response to the installation of two different types of engineered log jams (ELJs). A fixed-bed model focused on flow velocity and forces acting on an instrumented ELJ, a...

  10. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    ERIC Educational Resources Information Center

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  11. Water resources of Lincoln and Union counties, South Dakota

    USGS Publications Warehouse

    Niehus, C.A.

    1994-01-01

    Water resources of Lincoln and Union Counties occur as surface water in streams and lakes and as ground water in ten major glacial aquifers and one major bedrock aquifer. The major surface-water sources are the Missouri and Big Sioux Rivers. Glacial aquifers contain about 4 million acre-feet of water in storage; 1.5 million acre-feet are contained in the Missouri aquifer. The Wall Lake, Shindler, and Upper Vermillion-Missouri aquifers are deeply buried, confined aquifers with average thicknesses ranging from 31 to 41 feet. The Harrisburg and Big Sioux aquifers are shallow, water-table aquifers with average thicknesses of 26 and 28 feet, respectively. The Parker-Centerville, Newton Hills, and Brule Creek aquifers are buried, confined aquifers with average thicknesses ranging from 33 to 36 feet. The Lower Vermillion-Missouri aquifer is a buried, confined aquifer with an average thickness of 99 feet. The Missouri aquifer is confined in the northeastern portion of the aquifer and is a shallow, water-table aquifer elsewhere, with an average cumulative thickness of 84 feet.

  12. Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.

    PubMed

    Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C

    2006-12-08

    Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  13. Big Data, Big Problems: A Healthcare Perspective.

    PubMed

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  14. Description and correlation of reservoir heterogeneity within the Big Injun sandstone, Granny Creek field, West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargo, A.; McDowell, R.; Matchen, D.

    1992-01-01

    The Granny Creek field (approximately 6 sq. miles in area), located in Clay and Roane counties, West Virginia, produces oil from the Big Injun sandstone (Lower Mississippian). Analysis of 15 cores, 22 core analyses, and approximately 400 wireline logs (gamma ray and bulk density) shows that the Big Injun (approximately 12 to 55 feet thick) can be separated into an upper, coarse-grained sandstone and a lower, fine-grained sandstone. The Big Injun is truncated by an erosional unconformity of Early to Middle Mississippian age which removes the coarse-grained upper unit in the northwest portion of the field. The cores show nodules and zones (1 inch to 6 feet thick) of calcite and siderite cement. Where the cements occur as zones, porosity and permeability are reduced. Thin shales (1 inch to 1 foot thick) are found in the coarse-grained member of the Big Injun, whereas the bottom of the fine-grained, lower member contains intertongues of dark shale which cause pinchouts in porosity at the bottom of the reservoir. Calcite and siderite cement are recognized on wireline logs as high-bulk-density zones that form horizontal, inclined, and irregular pods of impermeable sandstone. At a 400-foot well spacing, pods may be confined to a single well or encompass as many as 30 wells, creating linear and irregular barriers to flow. These pods increase the length of the fluid flow path and may divide the reservoir into discrete compartments. The combination of sedimentologic and diagenetic features contributes to the heterogeneity observed in the field.

  15. 33 CFR 165.506 - Safety Zones; Fireworks Displays in the Fifth Coast Guard District.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Table to § 165.506 reference Datum NAD 1983. Table to § 165.506 No. Date Location Regulated area (a..., approximately 700 yards south of the waterfront at Southport, NC. 12. July 4th Big Foot Slough, Ocracoke, NC, Safety Zone All waters of Big Foot Slough within a 300 yard radius of the fireworks launch site in...

  16. Deep Mixing of 3He: Reconciling Big Bang and Stellar Nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggleton, P P; Dearborn, D P; Lattanzio, J

    2006-07-26

    Low-mass stars, approximately 1-2 solar masses, near the Main Sequence are efficient at producing 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. In this paper we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  17. Paleontologic investigations at Big Bone Lick State Park, Kentucky: A preliminary report

    USGS Publications Warehouse

    Schultz, C.B.; Tanner, L.G.; Whitmore, F.C.; Ray, L.L.; Crawford, E.C.

    1963-01-01

    The Big Bone Lick area in Kentucky, the first widely known collecting locality for vertebrate fossils in North America, is being investigated for further faunal and geologic evidence. Mammal bones, ranging in age from Wisconsin (Tazewell?) to Recent, were recovered in 1962 from four different faunal zones in two terrace fills.

  18. Fish assemblages of the upper Little Sioux River basin, Iowa, USA: Relationships with stream size and comparison with historical assemblages

    USGS Publications Warehouse

    Palic, D.; Helland, L.; Pedersen, B.R.; Pribil, J.R.; Grajeda, R.M.; Loan-Wilsey, Anna; Pierce, C.L.

    2007-01-01

    We characterized the fish assemblages in second to fifth order streams of the upper Little Sioux River basin in northwest Iowa, USA and compared our results with historical surveys. The fish assemblage consisted of over twenty species, was dominated numerically by creek chub, sand shiner, central stoneroller and other cyprinids, and was dominated in biomass by common carp. Most of the species and the great majority of all individuals present were at least moderately tolerant to environmental degradation, and biotic integrity at most sites was characterized as fair. Biotic integrity declined with increasing stream size, and degraded habitat in larger streams is a possible cause. No significant changes in species richness or the relative distribution of species' tolerance appear to have occurred since the 1930s.

  19. How Big Is Too Big?

    ERIC Educational Resources Information Center

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  20. BigDog

    NASA Astrophysics Data System (ADS)

    Playter, R.; Buehler, M.; Raibert, M.

    2006-05-01

    BigDog's goal is to be the world's most advanced quadruped robot for outdoor applications. BigDog is aimed at the mission of a mechanical mule, a category with few competitors to date: power autonomous quadrupeds capable of carrying significant payloads, operating outdoors, with static and dynamic mobility, and fully integrated sensing. BigDog is about 1 m tall, 1 m long and 0.3 m wide, and weighs about 90 kg. BigDog has demonstrated walking and trotting gaits, as well as standing up and sitting down. Since its creation in the fall of 2004, BigDog has logged tens of hours of walking, climbing and running time. It has walked up and down 25 and 35 degree inclines and trotted at speeds up to 1.8 m/s. BigDog has walked at 0.7 m/s over loose rock beds and carried over 50 kg of payload. We are currently working to expand BigDog's rough terrain mobility through the creation of robust locomotion strategies and terrain sensing capabilities.

  1. Nursing Needs Big Data and Big Data Needs Nursing.

    PubMed

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  2. The hydrodynamics of the Big Horn Basin: a study of the role of faults

    USGS Publications Warehouse

    Bredehoeft, J.D.; Belitz, K.; Sharp-Hansen, S.

    1992-01-01

    A three-dimensional mathematical model simulates groundwater flow in the Big Horn basin, Wyoming. The hydraulic head at depth over much of the Big Horn basin is near the land surface elevation, a condition usually defined as hydrostatic. This condition indicates a high, regional-scale, vertical conductivity for the sediments in the basin. Our hypothesis to explain the high conductivity is that the faults act as vertical conduits for fluid flow. These same faults can act as either horizontal barriers to flow or nonbarriers, depending upon whether the fault zones are more permeable or less permeable than the adjoining aquifers. -from Authors

  3. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Advanced microwave soil moisture studies. [Big Sioux River Basin, Iowa

    NASA Technical Reports Server (NTRS)

    Dalsted, K. J.; Harlan, J. C.

    1983-01-01

    Comparisons of low-level L-band brightness temperature (TB) and thermal infrared (TIR) data, as well as the following data sets: a soil map and land cover data; direct soil moisture measurements; and a computer-generated contour map, were statistically evaluated using regression analysis and linear discriminant analysis. Regression analysis of footprint data shows that statistical groupings of ground variables (soil features and land cover) hold promise for qualitative assessment of soil moisture and for reducing variance within the sampling space. Dry conditions appear to be more conducive to producing meaningful statistics than wet conditions. Regression analysis using field-averaged TB and TIR data did not approach the higher R² values obtained using within-field variations. The linear discriminant analysis indicates some capacity to distinguish categories, with the results being somewhat better on a field basis than on a footprint basis.

  5. How big is the Ocean Dead Zone off the Coast of California?

    NASA Astrophysics Data System (ADS)

    Hofmann, A. F.; Peltzer, E. T.; Walz, P. M.; Brewer, P. G.

    2010-12-01

    The term “Ocean Dead Zone”, generally referring to a zone that is devoid of aerobic marine life of value to humans, is now widely used in the press and scientific literature, but it appears not to be universally defined. The global assessment and monitoring of ocean dead zones, however, is of high public concern because of the considerable economic value associated with impacted fisheries and questions over the growth of these zones forced by climate change. We report on the existence of a zone at ~850 m depth off Santa Monica, California, where dissolved oxygen (DO) levels are 1 μmol/kg, an order of magnitude below any existing definition of an “Ocean Dead Zone”. ROV dives show the region to be visually devoid of all aerobic marine life. But how large is this dead zone, and how may its boundaries be defined? Without an accepted definition we cannot report this, nor can we compare it to other dead zones reported elsewhere in the world. “Dead zones” are now assessed solely by DO levels, and a multitude of values in different units are used (Fig. 1), which are clearly not universally applicable. This seriously hampers an integrated global monitoring and management effort and frustrates the development of valid connections with climate change and assessment of the consequences. Furthermore, input of anthropogenic CO2 can also stress marine life. Recent work supported by classical data suggests that higher pCO2 influences the thermodynamic energy efficiency of oxic respiration (CH2O + O2 -> CO2 + H2O). The ratio pO2/pCO2, called the respiration index (RI), emerges as the critical variable, combining the impacts of warming on DO and rising CO2 levels within a single, well-defined quantity. We advocate that future monitoring efforts report pO2 and pCO2 concurrently, thus making it possible to classify, monitor, and manage “dead zones” within a standard reference system that may include, as with, e.g., hurricanes, differing categories of intensity. Fig. 1. A DO
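As a rough sketch of the respiration index idea in this abstract, the snippet below computes RI as the plain ratio pO2/pCO2 that the abstract names; note that some published formulations take the base-10 logarithm of this ratio instead. The partial-pressure values are hypothetical, not measurements from the study.

```python
# Hedged sketch of the respiration index (RI) described above, taken here as
# the plain ratio pO2/pCO2 per the abstract. Values are illustrative only.
def respiration_index(pO2, pCO2):
    """RI = pO2/pCO2, with both partial pressures in the same units
    (e.g., microatmospheres). Higher RI means more thermodynamic headroom
    for aerobic respiration (CH2O + O2 -> CO2 + H2O)."""
    return pO2 / pCO2

# Hypothetical waters: oxygen-poor deep water vs. well-oxygenated surface water.
print(respiration_index(150.0, 1000.0))   # → 0.15, RI well below 1
print(respiration_index(2000.0, 400.0))   # → 5.0, RI well above 1
```

Reporting pO2 and pCO2 together, as the abstract advocates, is what makes a single index like this computable at all monitoring sites.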

  6. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  7. Big data, big knowledge: big data for personalized healthcare.

    PubMed

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ, and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  8. Using Reactive Transport Modeling to Understand Formation of the Stimson Sedimentary Unit and Altered Fracture Zones at Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    Hausrath, E. M.; Ming, D. W.; Peretyazhko, T.; Rampe, E. B.

    2017-01-01

    Water flowing through sediments at Gale Crater, Mars, created environments that were likely habitable, and sampled basin-wide hydrological systems. However, many questions remain about these environments and the fluids that generated them. Measurements of multiple fracture zones taken by the Mars Science Laboratory rover Curiosity can help constrain the environments that formed them because they can be compared to nearby associated parent material (Figure 1). For example, measurements of altered fracture zones from the target Greenhorn in the Stimson sandstone can be compared to parent material measured in the nearby Big Sky target, allowing constraints to be placed on the alteration conditions that formed the Greenhorn target from the Big Sky target. Similarly, CheMin measurements of the powdered < 150 micron fraction from the drillhole at Big Sky and sample from the Rocknest eolian deposit indicate that the mineralogies are strikingly similar. The main differences are the presence of olivine in the Rocknest eolian deposit, which is absent in the Big Sky target, and the presence of far more abundant Fe oxides in the Big Sky target. Quantifying the changes between the Big Sky target and the Rocknest eolian deposit can therefore help us understand the diagenetic changes that occurred forming the Stimson sedimentary unit. In order to interpret these aqueous changes, we performed reactive transport modeling of 1) the formation of the Big Sky target from a Rocknest eolian deposit-like parent material, and 2) the formation of the Greenhorn target from the Big Sky target. This work allows us to test the relationships between the targets and the characteristics of the aqueous conditions that formed the Greenhorn target from the Big Sky target, and the Big Sky target from a Rocknest eolian deposit-like parent material.

  9. General Crook and Counterinsurgency Warfare

    DTIC Science & Technology

    2001-06-01

    the Yellowstone River was declared as “unceded Indian Territory” where the Sioux and Cheyenne could reside, but the white settlers were excluded. ...the Yellowstone and Tongue Rivers. The designated column commanders Crook, Terry, and Gibbon were to move their columns towards the center of the area...brutal winter months on the northern plains. Crook reorganized his command at Fort Fetterman. First he renamed his command the Big Horn and Yellowstone

  10. Big data uncertainties.

    PubMed

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an ever-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but also how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
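The bias pitfall this abstract describes can be made concrete with a small simulation on synthetic data: a very large but selectively sampled data set estimates the population mean worse than a modest random sample. Everything below is invented for illustration.

```python
# Hedged illustration of the bias point above: sample size does not cure
# selection bias. All data are synthetic.
import random

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# A small but unbiased random sample:
small = random.sample(population, 100)

# A "big data" sample biased toward larger values (e.g., only heavy users
# of some service ever get recorded):
big_biased = [x for x in population if x > -0.5]

err_small = abs(sum(small) / len(small) - true_mean)
err_big = abs(sum(big_biased) / len(big_biased) - true_mean)

print(f"unbiased n={len(small)}: error {err_small:.3f}")
print(f"biased n={len(big_biased)}: error {err_big:.3f}")
```

The biased sample's systematic error persists no matter how many records it contains, whereas the small sample's random error shrinks with more (unbiased) data, which is the bias-versus-variance distinction the abstract draws.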

  11. Expansion of the Spinocerebellar Ataxia Type 10 (SCA10) Repeat in a Patient with Sioux Native American Ancestry

    PubMed Central

    Liu, Jilin; McFarland, Karen N.; Landrian, Ivette; Hutter, Diane; Teive, Hélio A. G.; Rasmussen, Astrid; Mulligan, Connie J.; Ashizawa, Tetsuo

    2013-01-01

    Spinocerebellar ataxia type 10 (SCA10), an autosomal dominant cerebellar ataxia, is caused by the expansion of the non-coding ATTCT pentanucleotide repeat in the ATAXIN 10 gene. To date, all cases of SCA10 are restricted to patients with ancestral ties to Latin American countries. Here, we report on a SCA10 patient with Sioux Native American ancestry and no reported Hispanic or Latino heritage. Neurological exam findings revealed impaired gait with mild, age-consistent cerebellar atrophy and no evidence of epileptic seizures. The age at onset for this patient, at 83 years of age, is the latest documented for SCA10 patients and is suggestive of a reduced penetrance allele in his family. Southern blot analysis showed an SCA10 expanded allele of 1400 repeats. Established SNPs surrounding the SCA10 locus showed a disease haplotype consistent with the previously described “SCA10 haplotype”. This case suggests that the SCA10 expansion represents an early mutation event that possibly occurred during the initial peopling of the Americas. PMID:24278426

  12. Five Big Ideas

    ERIC Educational Resources Information Center

    Morgan, Debbie

    2012-01-01

    Designing quality continuing professional development (CPD) for those teaching mathematics in primary schools is a challenge. If the CPD is to be built on the scaffold of five big ideas in mathematics, what might be these five big ideas? Might it just be a case of, if you tell me your five big ideas, then I'll tell you mine? Here, there is…

  13. Hydrodynamic simulations of physical aquatic habitat availability for Pallid Sturgeon in the Lower Missouri River, at Yankton, South Dakota, Kenslers Bend, Nebraska, Little Sioux, Iowa, and Miami, Missouri, 2006-07

    USGS Publications Warehouse

    Jacobson, Robert B.; Johnson, Harold E.; Dietsch, Benjamin J.

    2009-01-01

    -average basis, annual topographic change contributed little to habitat area variation. Net erosion occurred at Yankton (the upstream reach), and because erosion was distributed uniformly, there was little effect on many habitat metrics. Topographic change was spatially nonuniform at the Little Sioux and Kenslers Bend reaches. Shallow-water habitat units and some reach-scale patch statistics (edge density, patch density, and Simpson’s Diversity Index) were affected by these changes. Erosion dominated at the downstream reach, but habitat metrics did not vary substantially from 2006 to 2007. Among the habitat metrics explored, zones of convergent flow were identified as the areas that most closely correspond to spawning habitats of other sturgeon species, as identified in the scientific literature, and that are consistent with sparse data on pallid sturgeon spawning locations in the Lower Missouri River. Areas of convergent-zone habitat varied little with discharges that would be associated with spring pulsed flows, and relations with discharge changed negligibly between 2006 and 2007. Other habitat measures show how physical habitat varies with discharge and among the four reaches. Wake habitats defined by velocity gradients seem to correspond with migration pathways of adult pallid sturgeon. Habitats with low Froude numbers correspond to low-energy areas that may accumulate passively transported particles, organic matter, and larval fish. Among the modeled reaches, Yankton had substantially longer water residence times for equivalent flow exceedances than the other three modeled reaches. Longer residence times result from greater flow resistance in the relatively wide, shallow channel and may be associated with longer residence times of passively transported particulate materials.
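    The low-Froude-number criterion mentioned in the abstract can be made concrete: for open-channel flow, Fr = v / sqrt(g * d). The sketch below is our illustration; the 0.2 low-energy cutoff is an assumption, not a value from the report.

```python
import math

def froude_number(velocity_m_s: float, depth_m: float, g: float = 9.81) -> float:
    """Open-channel Froude number: Fr = v / sqrt(g * d)."""
    return velocity_m_s / math.sqrt(g * depth_m)

# Illustrative values; the 0.2 cutoff below is hypothetical, not from the report.
fr = froude_number(velocity_m_s=0.3, depth_m=2.0)
print(f"Fr = {fr:.3f} -> {'low-energy habitat' if fr < 0.2 else 'higher-energy flow'}")
```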

  14. Current status of coastal zone issues and management in China: a review.

    PubMed

    Cao, Wenzhi; Wong, Ming H

    2007-10-01

    This paper identifies and examines social-economic and environmental issues that have recently emerged in China's coastal zone. The management scheme and its progress are evaluated from the perspectives of coordinated legislation, institutional arrangement, public participation, capacity building, and scientific research (mainly coastal planning and functional zoning). The Chinese government has made a significant effort to develop legislation for the coastal zone. Jurisdictional and zoning boundaries have been established, and use rights for coastal and marine resources allocated. The State Oceanic Administration is the lead agency responsible for China's ocean policymaking and for overall management of ocean and coastal affairs. A demonstration project for integrated coastal management in Xiamen has been implemented and is characterized by a "decentralized" approach to decision making. Even so, comprehensive coastal management in China remains a big challenge facing many difficulties. Finally, recommendations are offered for tackling these issues in China's coastal zone management.

  15. Native Perennial Forb Variation Between Mountain Big Sagebrush and Wyoming Big Sagebrush Plant Communities

    NASA Astrophysics Data System (ADS)

    Davies, Kirk W.; Bates, Jon D.

    2010-09-01

    Big sagebrush (Artemisia tridentata Nutt.) occupies large portions of the western United States and provides valuable wildlife habitat. However, information is lacking quantifying differences in native perennial forb characteristics between mountain big sagebrush [A. tridentata spp. vaseyana (Rydb.) Beetle] and Wyoming big sagebrush [A. tridentata spp. wyomingensis (Beetle & A. Young) S.L. Welsh] plant communities. This information is critical to accurately evaluate the quality of habitat and forage that these communities can produce because many wildlife species consume large quantities of native perennial forbs and depend on them for hiding cover. To compare native perennial forb characteristics on sites dominated by these two subspecies of big sagebrush, we sampled 106 intact big sagebrush plant communities. Mountain big sagebrush plant communities produced almost 4.5-fold more native perennial forb biomass and had greater native perennial forb species richness and diversity compared to Wyoming big sagebrush plant communities (P < 0.001). Nonmetric multidimensional scaling (NMS) and the multiple-response permutation procedure (MRPP) demonstrated that native perennial forb composition varied between these plant communities (P < 0.001). Native perennial forb composition was more similar within plant communities grouped by big sagebrush subspecies than expected by chance (A = 0.112) and composition varied between community groups (P < 0.001). Indicator analysis did not identify any perennial forbs that were completely exclusive and faithful, but did identify several perennial forbs that were relatively good indicators of either mountain big sagebrush or Wyoming big sagebrush plant communities. Our results suggest that management plans and habitat guidelines should recognize differences in native perennial forb characteristics between mountain and Wyoming big sagebrush plant communities.

  16. Big Data and Neuroimaging.

    PubMed

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  17. Cryptography for Big Data Security

    DTIC Science & Technology

    2015-07-13

    Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin et al.; contact: arkady@ll.mit.edu. [Remainder of record is front-matter and table-of-contents residue from the source PDF.]

  18. Data: Big and Small.

    PubMed

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization to inform effective planning and forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions; leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  19. Big Data in industry

    NASA Astrophysics Data System (ADS)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon comes the need for new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. This growth has created a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data generated continuously, much of it created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). The challenge is to develop new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years; it is also increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. The paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.

  20. 75 FR 27507 - Safety Zone; Delaware River, Big Timber Creek, Westville, NJ

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... in June with a rain date of the first Saturday in July. This Safety Zone is necessary to provide for... p.m. on the last Saturday in June with a rain date of the first Saturday in July. Dated: April 29...

  1. The big data-big model (BDBM) challenges in ecological research

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science over the past decades, owing to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulation experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global change. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations across the nation, will generate large volumes of ecological data every day. The raw data from those networks' sensors offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite greatly simplifying the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integrating massive data from multiple big-data sources with complex models has to tackle the Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  2. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  3. Health-hazard evaluation report HETA 87-097-1820, Devil's Lake Sioux Manufacturing Corporation, Fort Totten, North Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichting, A.T.

    1987-07-01

    In response to a request from employees at the Devil's Lake Sioux Manufacturing Corporation located in Fort Totten, North Dakota, a study was made to determine a possible health hazard from n-hexane exposure and other organic solvents used in the manufacture of Kevlar combat helmets at the facility. Ten women working as edgers had been treated for carpal tunnel syndrome. Air samples revealed concentrations of xylene ranging from not detectable to 6.44 mg/m³, toluene from 0.46 to 33.2 mg/m³, hexane from 0.27 to 83.3 mg/m³, and methyl ethyl ketone from 20.0 to 310 mg/m³. These levels were highest in the areas of the facility where edgers and glue spray booth operators worked. The author concludes that solvent exposures in these locations posed a potential health hazard. The carpal tunnel syndrome complaints appear to be related to ergonomic factors. The author recommends substituting less-toxic materials in the workplace; applying proper engineering controls; improving local-exhaust ventilation; providing protective clothing for workers; sampling employees for organic solvent exposure at regular intervals; and informing employees of all hazards inherent in their work.

  4. Big data need big theory too

    PubMed Central

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  5. Big data need big theory too.

    PubMed

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  6. Increased plasma levels of big-endothelin-2 and big-endothelin-3 in patients with end-stage renal disease.

    PubMed

    Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi

    2012-10-15

    Big endothelins (pro-endothelins; inactive precursors) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members (ET-1, ET-2, and ET-3) from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and by organs such as the thyroid, the salivary gland, and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether the endogenous ET-1, ET-2, and ET-3 systems, including big ETs, are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3, big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of ET-1, ET-3, big ET-1, big ET-2, and big ET-3 were all markedly elevated, and the increase was higher for the big ETs (big ET-1, 4-fold; big ET-2, 6-fold; big ET-3, 5-fold) than for the ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 are thus markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of endothelin-converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Big Data and medicine: a big deal?

    PubMed

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  8. Untapped Potential: Fulfilling the Promise of Big Brothers Big Sisters and the Bigs and Littles They Represent

    ERIC Educational Resources Information Center

    Bridgeland, John M.; Moore, Laura A.

    2010-01-01

    American children represent a great untapped potential in our country. For many young people, choices are limited and the goal of a productive adulthood is a remote one. This report paints a picture of who these children are, shares their insights and reflections about the barriers they face, and offers ways forward for Big Brothers Big Sisters as…

  9. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    PubMed

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  10. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    ERIC Educational Resources Information Center

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  11. Implementing Big History.

    ERIC Educational Resources Information Center

    Welter, Mark

    2000-01-01

    Contends that world history should be taught as "Big History," a view that includes all space and time beginning with the Big Bang. Discusses five "Cardinal Questions" that serve as a course structure and address the following concepts: perspectives, diversity, change and continuity, interdependence, and causes. (CMK)

  12. Big data for health.

    PubMed

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  13. Big Data: Implications for Health System Pharmacy

    PubMed Central

    Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.

    2016-01-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services. PMID:27559194

  14. Big Data: Implications for Health System Pharmacy.

    PubMed

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  15. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed, binary, indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and of Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view are transmitted rather than the entire file, enabling fast remote access to large distributed datasets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
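    The multi-resolution ("zoom level") indexing idea behind these formats can be sketched with a toy model (our conceptual illustration in Python; the real files are binary, R-tree indexed, and compressed, and this is not the UCSC API):

```python
import numpy as np

def build_zoom_levels(values, factors=(16, 256)):
    """Precompute coarse summaries (mean per bin), like BigWig zoom levels."""
    levels = {1: values}
    for f in factors:
        nbins = len(values) // f
        levels[f] = values[: nbins * f].reshape(nbins, f).mean(axis=1)
    return levels

def query_mean(levels, start, end):
    """Serve a range query from the coarsest level that still resolves it."""
    for f in sorted(levels, reverse=True):          # try coarsest level first
        if start % f == 0 and end % f == 0 and end - start >= f:
            return levels[f][start // f : end // f].mean()
    return levels[1][start:end].mean()              # fall back to full resolution

signal = np.arange(4096, dtype=float)   # toy per-base signal
levels = build_zoom_levels(signal)
print(query_mean(levels, 0, 4096))      # answered from the 256x level -> 2047.5
```

    A browser showing a wide region reads only the coarse level; zooming in switches to finer levels, so the data transferred stays roughly proportional to the pixels drawn rather than to the size of the region.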

  16. Big Challenges and Big Opportunities: The Power of "Big Ideas" to Change Curriculum and the Culture of Teacher Planning

    ERIC Educational Resources Information Center

    Hurst, Chris

    2014-01-01

    Mathematical knowledge of pre-service teachers is currently "under the microscope" and the subject of research. This paper proposes a different approach to teacher content knowledge based on the "big ideas" of mathematics and the connections that exist within and between them. It is suggested that these "big ideas"…

  17. Countering misinformation concerning big sagebrush

    Treesearch

    Bruce L Welch; Craig Criddle

    2003-01-01

    This paper examines the scientific merits of eight axioms of range or vegetative management pertaining to big sagebrush. These axioms are: (1) Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis) does not naturally exceed 10 percent canopy cover and mountain big sagebrush (A. t. ssp. vaseyana) does not naturally exceed 20 percent canopy...

  18. BigNeuron dataset V.0.0

    DOE Data Explorer

    Ramanathan, Arvind

    2016-01-01

    The cleaned bench-testing reconstructions for the gold166 datasets have been put online at GitHub (https://github.com/BigNeuron/Events-and-News/wiki/BigNeuron-Events-and-News and https://github.com/BigNeuron/Data/releases/tag/gold166_bt_v1.0). The respective image datasets were released a while ago from other sites (the main pointer is available at GitHub as well, https://github.com/BigNeuron/Data/releases/tag/Gold166_v1), but since the files were big, the actual downloading was distributed across three continents separately.

  19. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    PubMed

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the silver bullet it has been touted to be. Here our main concern is the overall impact of big data: its current manifestation is constructing a Maginot Line in 21st-century science. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data, and it is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking to problem definition in order to address science challenges.

  20. Challenges of Big Data Analysis.

    PubMed

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive changes in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set, and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity; when violated, they can lead to wrong statistical inferences and consequently wrong scientific conclusions.
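
    The "spurious correlation" challenge named in this abstract is easy to demonstrate with a minimal numpy sketch (toy data, not the paper's derivations): at a fixed sample size, the largest absolute correlation between an outcome and a growing set of completely independent noise features keeps increasing, so coincidental associations look increasingly convincing.

```python
# With n fixed, the maximum |sample correlation| over many independent
# noise features grows with the number of features -- pure coincidence
# starts to look like signal.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # sample size
y = rng.standard_normal(n)               # outcome, independent of everything
X = rng.standard_normal((n, 1000))       # 1000 independent noise features

def max_abs_corr(X, y):
    # Pearson correlation of y with each column of X; return largest magnitude
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    return float(np.max(np.abs(Xc.T @ yc) / len(y)))

print(max_abs_corr(X[:, :10], y))   # few features: modest max correlation
print(max_abs_corr(X, y))           # many features: much larger max correlation
```

    This is one reason variable screening on raw correlations becomes unreliable in high dimensions.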

  1. Challenges of Big Data Analysis

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-01-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive changes in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set, and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity; when violated, they can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469

  2. Activities and Ecological Role of Adult Aquatic Insects in the Riparian Zone of Streams

    Treesearch

    John K. Jackson; Vincent H. Resh

    1989-01-01

    Most adult aquatic insects that emerge from streams live briefly in the nearby riparian zone. Adult activities, such as mating, dispersal, and feeding, influence their distribution in the terrestrial habitat. A study at Big Sulphur Creek, California, has shown that both numbers and biomass of adult aquatic insects are greatest in the near-stream vegetation; however,...

  3. Big Data and Chemical Education

    ERIC Educational Resources Information Center

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  4. Big data in fashion industry

    NASA Astrophysics Data System (ADS)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. Finally, the methodology and working of a system that will use this data are briefly described.

  5. The Big6 Collection: The Best of the Big6 Newsletter.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    The Big6 is a complete approach to implementing meaningful learning and teaching of information and technology skills, essential for 21st century living. Including in-depth articles, practical tips, and explanations, this book offers a varied range of material about students and teachers, the Big6, and curriculum. The book is divided into 10 main…

  6. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both “unsupervised” and “supervised” machine learning algorithms, with examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:27908398
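
    As a hedged illustration of the review's two flavors of machine learning, here is a sketch in Python (the review itself surveys R packages): an unsupervised method (k-means) recovers group structure without labels, while a supervised method (a nearest-centroid classifier) uses the labels directly. The toy data and helper names are this sketch's own, not the review's.

```python
# Unsupervised vs. supervised learning on two well-separated toy clusters.
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.3, size=(20, 2))       # cluster near the origin
b = rng.normal(3.0, 0.3, size=(20, 2))       # cluster near (3, 3)
X = np.vstack([a, b])
labels = np.array([0] * 20 + [1] * 20)

# Unsupervised: k-means with k=2, a few fixed-point iterations
centers = X[[0, -1]].copy()
for _ in range(10):
    assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])

# Supervised: nearest-centroid classifier built from the labels
cents = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
def predict(p):
    return int(np.argmin(((cents - np.asarray(p)) ** 2).sum(-1)))

print(predict([0.1, 0.1]), predict([2.9, 3.1]))
```

    The distinction the review draws is exactly this: k-means never sees `labels`, while the classifier is built from them.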

  7. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both “unsupervised” and “supervised” machine learning algorithms, with examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:24799088

  8. Big data bioinformatics.

    PubMed

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "unsupervised" and "supervised" machine learning algorithms, with examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.

  9. The Black Mountain tectonic zone--a reactivated northeast-trending crustal shear zone in the Yukon-Tanana Upland of east-central Alaska: Chapter D in Recent U.S. Geological Survey studies in the Tintina Gold Province, Alaska, United States, and Yukon, Canada--results of a 5-year project

    USGS Publications Warehouse

    O'Neill, J. Michael; Day, Warren C.; Alienikoff, John N.; Saltus, Richard W.; Gough, Larry P.; Day, Warren C.

    2007-01-01

    The Black Mountain tectonic zone in the Yukon-Tanana terrane of east-central Alaska is a belt of diverse northeast-trending geologic features that can be traced across Black Mountain in the southeast corner of the Big Delta 1°×3° quadrangle. Geologic mapping in the larger-scale B1 quadrangle of the Big Delta quadrangle, in which Black Mountain is the principal physiographic feature, has revealed a continuous zone of normal and left-lateral strike-slip high-angle faults and shear zones, some of which have late Tertiary to Quaternary displacement histories. The tectonic zone includes complexly intruded wall rocks and intermingled apophyses of the contiguous mid-Cretaceous Goodpaster and Mount Harper granodioritic plutons, mafic to intermediate composite dike swarms, precious-metal mineralization, early Tertiary volcanic activity, and Quaternary fault scarps. These structures define a zone as much as 6 to 13 kilometers (km) wide and more than 40 km long that can be traced diagonally across the B1 quadrangle into the adjacent Eagle 1°×3° quadrangle to the east. Recurrent activity along the tectonic zone, from at least mid-Cretaceous to Quaternary time, suggests the presence of a buried, fundamental tectonic feature beneath the zone that has influenced the tectonic development of this part of the Yukon-Tanana terrane. The tectonic zone, centered on Black Mountain, lies directly above a profound northeast-trending aeromagnetic anomaly between the Denali and Tintina fault systems. The anomaly separates moderately to strongly magnetic terrane on the northwest from a huge, weakly magnetic terrane on the southeast. The tectonic zone is parallel to the similarly oriented left-lateral, strike-slip Shaw Creek fault zone 85 km to the west.

  10. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    PubMed

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived; notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
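
    The mapping question this abstract poses can be phrased as a regression problem. The following numpy sketch uses simulated ratings (toy data, not the study's): a trait judgment is regressed onto per-face trustworthiness and dominance scores, and a high R² indicates the trait lies within the 2D space, while a low R² would signal the extra dimensions the authors report for neuroticism, extraversion, and conscientiousness.

```python
# Does a trait judgment "map onto" the trustworthiness/dominance space?
# Fit trait ~ a*trust + b*dominance + c by least squares and inspect R^2.
import numpy as np

rng = np.random.default_rng(2)
n_faces = 100
trust = rng.standard_normal(n_faces)
dom = rng.standard_normal(n_faces)

# Toy "warmth" judgment constructed inside the 2D space (plus small noise)
warmth = 0.8 * trust - 0.3 * dom + 0.05 * rng.standard_normal(n_faces)

A = np.column_stack([trust, dom, np.ones(n_faces)])
coef, *_ = np.linalg.lstsq(A, warmth, rcond=None)
resid = warmth - A @ coef
r2 = 1 - resid.var() / warmth.var()
print(round(r2, 3))   # close to 1: this trait lies within the 2D space
```

    For a judgment that genuinely requires a third dimension, the residual variance stays large and R² drops well below 1.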

  11. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Treesearch

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  12. The Big Bang Theory

    ScienceCinema

    Lincoln, Don

    2018-01-16

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn't true. In this video, Fermilab’s Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  13. The Big Bang Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn't true. In this video, Fermilab’s Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  14. Seeding considerations in restoring big sagebrush habitat

    Treesearch

    Scott M. Lambert

    2005-01-01

    This paper describes methods of managing or seeding to restore big sagebrush communities for wildlife habitat. The focus is on three big sagebrush subspecies, Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis), basin big sagebrush (Artemisia tridentata ssp. tridentata), and mountain...

  15. ARTIST CONCEPT - BIG JOE

    NASA Image and Video Library

    1963-09-01

    S63-19317 (October 1963) --- Pen and ink views of comparative arrangements of several capsules including the existing "Big Joe" design, the compromise "Big Joe" design, and the "Little Joe". All capsule designs are labeled and include dimensions. Photo credit: NASA

  16. Big Society, Big Deal?

    ERIC Educational Resources Information Center

    Thomson, Alastair

    2011-01-01

    Political leaders like to put forward guiding ideas or themes which pull their individual decisions into a broader narrative. For John Major it was Back to Basics, for Tony Blair it was the Third Way and for David Cameron it is the Big Society. While Mr. Blair relied on Lord Giddens to add intellectual weight to his idea, Mr. Cameron's legacy idea…

  17. Big Data Analytics in Medicine and Healthcare.

    PubMed

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics (value, volume, velocity, variety, veracity and variability) are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.

  18. The Big Bang Singularity

    NASA Astrophysics Data System (ADS)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.

  19. Medical big data: promise and challenges.

    PubMed

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and they share the inherent limitations of observational studies, namely the inability to test causality in the presence of residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
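
    The propensity-score idea mentioned near the end of this abstract can be sketched with toy data (hypothetical variables, not a real cohort): a confounder drives both treatment and outcome, so a naive group comparison is biased, while matching each treated subject to the control with the nearest propensity score largely recovers the true effect.

```python
# Minimal propensity-score matching sketch on simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 200
severity = rng.uniform(0, 1, n)                 # confounder
treated = rng.uniform(0, 1, n) < severity       # sicker patients treated more often
outcome = 2.0 * treated - 3.0 * severity + rng.normal(0, 0.1, n)  # true effect = 2.0

# In practice the propensity score is estimated (e.g., logistic regression);
# here it is known by construction: P(treated | severity) = severity.
ps = severity
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]

pair_diffs = []
for i in t_idx:
    j = c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]  # nearest-score control
    pair_diffs.append(outcome[i] - outcome[j])

naive = outcome[treated].mean() - outcome[~treated].mean()
matched = float(np.mean(pair_diffs))
print(round(naive, 2), round(matched, 2))  # naive underestimates; matched is near 2.0
```

    As the abstract notes, this adjusts only for measured confounders; residual confounding and reverse causation remain outside its reach.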

  20. Medical big data: promise and challenges

    PubMed Central

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-01-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and they share the inherent limitations of observational studies, namely the inability to test causality in the presence of residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of the practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology. PMID:28392994

  1. Measuring the Promise of Big Data Syllabi

    ERIC Educational Resources Information Center

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  2. The big bang

    NASA Astrophysics Data System (ADS)

    Silk, Joseph

    Our universe was born billions of years ago in a hot, violent explosion of elementary particles and radiation: the big bang. What do we know about this ultimate moment of creation, and how do we know it? Drawing upon the latest theories and technology, this new edition of The big bang is a sweeping, lucid account of the event that set the universe in motion. Joseph Silk begins his story with the first microseconds of the big bang, on through the evolution of stars, galaxies, clusters of galaxies, quasars, and into the distant future of our universe. He also explores the fascinating evidence for the big bang model and recounts the history of cosmological speculation. Revised and updated, this new edition features all the most recent astronomical advances, including: photos and measurements from the Hubble Space Telescope, Cosmic Background Explorer satellite (COBE), and Infrared Space Observatory; the latest estimates of the age of the universe; new ideas in string and superstring theory; recent experiments on neutrino detection; new theories about the presence of dark matter in galaxies; new developments in the theory of the formation and evolution of galaxies; and the latest ideas about black holes, wormholes, quantum foam, and multiple universes.

  3. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has led to a new type of analytics that processes Internet-of-things data with low-cost engines, using parallel computing to speed up data processing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, searching for several issues across different documents. The application infrastructure processing engine is described from an architectural perspective and from a performance perspective, showing evidence of how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.
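
    The parallel-analytics pattern this abstract describes can be sketched with Python's standard library (toy documents and hypothetical search terms, not the real BOE corpus or the article's engine): each gazette document is scanned independently, so the work distributes cleanly across workers.

```python
# Scan many documents in parallel, each worker counting the issues searched for.
from concurrent.futures import ThreadPoolExecutor

docs = {
    "boe-001": "decreto sobre contratos y subvenciones",
    "boe-002": "resolución de subvenciones agrarias",
    "boe-003": "orden sin términos de interés",
}
terms = ("subvenciones", "contratos")

def scan(item):
    doc_id, text = item
    return doc_id, {t: text.count(t) for t in terms}

# Documents are independent, so the scans parallelize with no shared state.
with ThreadPoolExecutor(max_workers=4) as pool:
    hits = dict(pool.map(scan, docs.items()))

print(hits["boe-001"])
```

    A thread pool keeps the sketch portable; a production deployment like the one the article benchmarks would distribute the same per-document map across several machines.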

  4. Big Data's Role in Precision Public Health.

    PubMed

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  5. Antigravity and the big crunch/big bang transition

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  6. Restoring Wyoming big sagebrush

    Treesearch

    Cindy R. Lysne

    2005-01-01

    The widespread occurrence of big sagebrush can be attributed to many adaptive features. Big sagebrush plays an essential role in its communities by providing wildlife habitat, modifying local environmental conditions, and facilitating the reestablishment of native herbs. Currently, however, many sagebrush steppe communities are highly fragmented. As a result, restoring...

  7. Exploiting big data for critical care research.

    PubMed

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years, the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider the scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining the confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine and by bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  8. Structure and geomorphology of the "big bend" in the Hosgri-San Gregorio fault system, offshore of Big Sur, central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.; Kluesner, J. W.; Dartnell, P.

    2015-12-01

    The right-lateral Hosgri-San Gregorio fault system extends mainly offshore for about 400 km along the central California coast and is a major structure in the distributed transform margin of western North America. We recently mapped a poorly known 64-km-long section of the Hosgri fault offshore Big Sur between Ragged Point and Pfeiffer Point using high-resolution bathymetry, tightly spaced single-channel seismic-reflection and coincident marine magnetic profiles, and reprocessed industry multichannel seismic-reflection data. Regionally, this part of the Hosgri-San Gregorio fault system has a markedly more westerly trend (by 10° to 15°) than parts farther north and south, and thus represents a transpressional "big bend." Through this "big bend," the fault zone is never more than 6 km from the shoreline and is a primary control on the dramatic coastal geomorphology that includes high coastal cliffs, a narrow (2- to 8-km-wide) continental shelf, a sharp shelfbreak, and a steep (as much as 17°) continental slope incised by submarine canyons and gullies. Depth-converted industry seismic data suggest that the Hosgri fault dips steeply to the northeast and forms the eastern boundary of the asymmetric (deeper to the east) Sur Basin. Structural relief on Franciscan basement across the Hosgri fault is about 2.8 km. Locally, we recognize five discrete "sections" of the Hosgri fault based on fault trend, shallow structure (e.g., disruption of young sediments), seafloor geomorphology, and coincidence with high-amplitude magnetic anomalies sourced by ultramafic rocks in the Franciscan Complex. From south to north, section lengths and trends are as follows: (1) 17 km, 312°; (2) 10 km, 322°; (3) 13 km, 317°; (4) 3 km, 329°; (5) 21 km, 318°. Through these sections, the Hosgri surface trace includes several right steps that vary from a few hundred meters to about 1 km wide, none wide enough to provide a barrier to continuous earthquake rupture.

  9. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    PubMed

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation, and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity of their folds. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.

  10. Metal atom dynamics in superbulky metallocenes: a comparison of (Cp(BIG))2Sn and (Cp(BIG))2Eu.

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Schwerdtfeger, Peter; Nowik, Israel; Herber, Rolfe H

    2014-02-17

    Cp(BIG)2Sn (Cp(BIG) = (4-n-Bu-C6H4)5cyclopentadienyl), prepared by reaction of 2 equiv of Cp(BIG)Na with SnCl2, crystallized isomorphously with other known metallocenes with this ligand (Ca, Sr, Ba, Sm, Eu, Yb). Similarly, it shows perfect linearity, C-H···C(π) bonding between the Cp(BIG) rings and out-of-plane bending of the aryl substituents toward the metal. Whereas all other Cp(BIG)2M complexes show large disorder in the metal position, the Sn atom in Cp(BIG)2Sn is perfectly ordered. In contrast, (119)Sn and (151)Eu Mößbauer investigations on the corresponding Cp(BIG)2M metallocenes show that Sn(II) is more dynamic and loosely bound than Eu(II). The large displacement factors in the group 2 and especially in the lanthanide(II) metallocenes Cp(BIG)2M can be explained by static metal disorder in a plane parallel to the Cp(BIG) rings. Despite parallel Cp(BIG) rings, these metallocenes have a nonlinear Cp(center)-M-Cp(center) geometry. This is explained by an ionic model in which metal atoms are polarized by the negatively charged Cp rings. The extent of nonlinearity is in line with trends found in M(2+) ion polarizabilities. The range of known calculated dipole polarizabilities at the Douglas-Kroll CCSD(T) level was extended with values (atomic units) for Sn(2+) 15.35, Sm(2+)(4f(6) (7)F) 9.82, Eu(2+)(4f(7) (8)S) 8.99, and Yb(2+)(4f(14) (1)S) 6.55. This polarizability model cannot be applied to the predominantly covalently bound Cp(BIG)2Sn, which shows a perfectly ordered structure. The bent geometry of Cp*2Sn should therefore not be explained by metal polarizability but is due to van der Waals Cp*···Cp* attraction and (to some extent) to a small p-character component in the Sn lone pair.

  11. Big Joe Capsule Assembly Activities

    NASA Image and Video Library

    1959-08-01

    Big Joe Capsule Assembly Activities in 1959 at NASA Glenn Research Center (formerly NASA Lewis). Big Joe was an Atlas missile that successfully launched a boilerplate model of the Mercury capsule on September 9, 1959.

  12. Urgent Call for Nursing Big Data.

    PubMed

    Delaney, Connie W

    2016-01-01

    The purpose of this panel is to expand internationally a National Action Plan for sharable and comparable nursing data for quality improvement and big data science. There is an urgent need to ensure that nursing has sharable and comparable data for quality improvement and big data science. A national collaborative, Nursing Knowledge and Big Data Science, includes multi-stakeholder groups focused on a National Action Plan toward implementing and using sharable and comparable nursing big data. Panelists will share accomplishments and future plans with an eye toward international collaboration. This presentation is suitable for any audience attending the NI2016 conference.

  13. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlined its speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
© 2018 Iacono et al.; Published by Cold Spring Harbor
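
The index-cell idea described above, pooling transcriptionally similar cells into aggregate transcriptomes, can be sketched as follows. This is a minimal illustration on synthetic counts, with k-means standing in for bigSCale's actual directed convolution strategy; the matrix sizes and cluster count are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 50))  # toy data: 300 cells x 50 genes

# Group transcriptionally similar cells, then pool their counts into
# "index cell" transcriptomes, one per cluster.
k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(counts)
index_cells = np.vstack([counts[labels == c].sum(axis=0) for c in range(k)])

# Pooling reduces 300 cells to 10 index cells while preserving
# the total transcript counts of the data set.
```

The pooled profiles are denser than any single cell, which is what lets downstream clustering resolve rare populations.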

  14. [Big data in medicine and healthcare].

    PubMed

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves (e.g., in social networks), and digitalization is ever increasing. Currently, several new trends toward new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics-research is one clear Big Data topic. In practice, electronic health records, free open data, and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and to organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions are coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  15. Helioseismic Constraints on the Gradient of Angular Velocity at the Base of the Solar Convection Zone

    NASA Technical Reports Server (NTRS)

    Kosovichev, A. G.

    1996-01-01

    The layer of transition from the nearly rigid rotation of the radiative interior to the latitudinal differential rotation of the convection zone plays a significant role in the internal dynamics of the Sun. Using rotational splitting coefficients of the p-mode frequencies, obtained during 1986-1990 at the Big Bear Solar Observatory, we have found that the thickness of the transitional layer is 0.09 ± 0.04 solar radii (63 ± 28 Mm), and that most of the transition occurs beneath the adiabatically stratified part of the convection zone, as suggested by the dynamo theories of the 22 yr solar activity cycle.

  16. High School Students as Mentors: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study

    ERIC Educational Resources Information Center

    Herrera, Carla; Kauh, Tina J.; Cooney, Siobhan M.; Grossman, Jean Baldwin; McMaken, Jennifer

    2008-01-01

    High schools have recently become a popular source of mentors for school-based mentoring (SBM) programs. The high school Bigs program of Big Brothers Big Sisters of America, for example, currently involves close to 50,000 high-school-aged mentors across the country. While the use of these young mentors has several potential advantages, their age…

  17. Limnology of Big Lake, south-central Alaska, 1983-84

    USGS Publications Warehouse

    Woods, Paul F.

    1992-01-01

    within the epilimnion. An analysis of nitrogen-to-phosphorus ratios showed that nitrogen was the nutrient most likely to limit phytoplankton growth during the summer. Although mean chlorophyll-a concentrations were at oligotrophic levels, concentrations did peak at 46.5 micrograms per liter in the east basin. During each year and in both basins, the peak chlorophyll-a concentrations were measured within the hypolimnion because the euphotic zone commonly was deeper than the epilimnion during the summer. The annual integral primary production of Big Lake in 1984 was 29.6 grams of carbon fixed per square meter with about 90 percent of that produced during May through October. During this time period, the lake received 76 percent of its annual input of solar irradiance. Monthly integral primary production, in milligrams of carbon fixed per square meter, ranged from 1.5 in January to 7,050 in July. When compared with the range of annual integral primary production measured in 50 International Biological Program lakes throughout the world, Big Lake had a low value of annual integral primary production. The results of this study lend credence to the concerns about the potential eutrophication of Big Lake. Increases in the supply of oxygen-demanding materials to Big Lake could worsen the hypolimnetic dissolved-oxygen deficit and possibly shift the lake's trophic state toward mesotrophy or eutrophy.

  18. A kinematic model for the evolution of the Eastern California Shear Zone and Garlock Fault, Mojave Desert, California

    NASA Astrophysics Data System (ADS)

    Dixon, Timothy H.; Xie, Surui

    2018-07-01

    The Eastern California shear zone in the Mojave Desert, California, accommodates nearly a quarter of Pacific-North America plate motion. In south-central Mojave, the shear zone consists of six active faults, with the central Calico fault having the fastest slip rate. However, faults to the east of the Calico fault have larger total offsets. We explain this pattern of slip rate and total offset with a model involving a crustal block (the Mojave Block) that migrates eastward relative to a shear zone at depth whose position and orientation is fixed by the Coachella segment of the San Andreas fault (SAF), southwest of the transpressive "big bend" in the SAF. Both the shear zone and the Garlock fault are assumed to be a direct result of this restraining bend, and consequent strain redistribution. The model explains several aspects of local and regional tectonics, may apply to other transpressive continental plate boundary zones, and may improve seismic hazard estimates in these zones.

  19. Making big sense from big data in toxicology by read-across.

    PubMed

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are, first of all, repositories for finding similar substances and for ensuring that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Finally, REACH-across, a new web-based tool under development that aims to support and automate structure-based read-across, is presented among other developments.
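
Read-across as described above, predicting a property of a data-poor substance from its most similar data-rich neighbors, can be sketched as a similarity-weighted nearest-neighbor estimate. Tanimoto similarity on binary fingerprints is a common cheminformatics choice; the fingerprints and property values below are invented toy data, not taken from REACH-across or any real dataset.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def read_across(query_fp, known_fps, known_values, k=2):
    """Predict a property as the similarity-weighted mean of the
    k most similar substances with measured data."""
    sims = np.array([tanimoto(query_fp, fp) for fp in known_fps])
    top = np.argsort(sims)[::-1][:k]
    return float(np.average(np.asarray(known_values)[top], weights=sims[top]))

# Hypothetical substances: fingerprints and a measured toxicity value each
known_fps = [[1, 1, 0, 1], [1, 0, 0, 0], [0, 1, 1, 0]]
known_vals = [0.8, 0.7, 0.1]
pred = read_across([1, 1, 0, 0], known_fps, known_vals, k=2)  # ≈ 0.76
```

The prediction is dominated by the two most similar substances, which is the "local similarity" character of read-across that the abstract emphasizes.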

  20. [Big data in official statistics].

    PubMed

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  1. Considerations on Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    LIU, Zhen; GUO, Huadong; WANG, Changlin

    2016-11-01

    Geospatial data, as a significant portion of big data, has recently gained the full attention of researchers. However, few researchers focus on the evolution of geospatial data and its scientific research methodologies. When entering into the big data era, fully understanding the changing research paradigm associated with geospatial data will definitely benefit future research on big data. In this paper, we look deeply into these issues by examining the components and features of geospatial big data, reviewing relevant scientific research methodologies, and examining the evolving pattern of geospatial data in the scope of the four ‘science paradigms’. This paper proposes that geospatial big data has significantly shifted the scientific research methodology from ‘hypothesis to data’ to ‘data to questions’ and that it is important to explore the generality of growing geospatial data ‘from bottom to top’. In particular, four research areas that most reflect data-driven geospatial research are proposed: spatial correlation, spatial analytics, spatial visualization, and scientific knowledge discovery. It is also pointed out that privacy and quality issues of geospatial data may require more attention in the future. Finally, some challenges and thoughts are raised for future discussion.

  2. Integration of bio- and geoscience data with the ODM2 standards and software ecosystem for the CZOData and BiG CZ Data projects

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.

    2015-12-01

    We have developed a family of solutions to the challenges of integrating diverse data from the biological and geological (BiG) disciplines for Critical Zone (CZ) science. These standards and software solutions have been developed around the new Observations Data Model version 2.0 (ODM2, http://ODM2.org), which was designed as a profile of the Open Geospatial Consortium's (OGC) Observations and Measurements (O&M) standard. The ODM2 standards and software ecosystem has at its core an information model that balances specificity with flexibility to powerfully and equally serve the needs of multiple dataset types, from multivariate sensor-generated time series to geochemical measurements of specimen hierarchies to multi-dimensional spectral data to biodiversity observations. ODM2 has been adopted as the information model guiding the next generation of cyberinfrastructure development for the Interdisciplinary Earth Data Alliance (http://www.iedadata.org/) and the CUAHSI Water Data Center (https://www.cuahsi.org/wdc). Here we present several components of the ODM2 standards and software ecosystem that were developed specifically to help CZ scientists and their data managers to share and manage data through the national Critical Zone Observatory data integration project (CZOData, http://criticalzone.org/national/data/) and the bio integration with geo for critical zone science data project (BiG CZ Data, http://bigcz.org/). These include the ODM2 Controlled Vocabulary system (http://vocabulary.odm2.org), the YAML Observation Data Archive & exchange (YODA) File Format (https://github.com/ODM2/YODA-File) and the BiG CZ Toolbox, which will combine easy-to-install ODM2 databases (https://github.com/ODM2/ODM2) with a variety of graphical software packages for data management such as ODMTools (https://github.com/ODM2/ODMToolsPython) and the ODM2 Streaming Data Loader (https://github.com/ODM2/ODM2StreamingDataLoader).

  3. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Treesearch

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species ofWild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  4. Big Data in Medicine is Driving Big Changes

    PubMed Central

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  5. Health Informatics Scientists' Perception About Big Data Technology.

    PubMed

    Minou, John; Routsis, Fotios; Gallos, Parisis; Mantas, John

    2017-01-01

    The aim of this paper is to present the perceptions of Health Informatics Scientists about Big Data Technology in Healthcare. An empirical study was conducted among 46 scientists to assess their knowledge about Big Data Technology and their perceptions about using this technology in healthcare. Based on the study findings, 86.7% of the scientists had knowledge of Big Data Technology. Furthermore, 59.1% of the scientists believed that Big Data Technology refers to structured data. Additionally, 100% of the population believed that Big Data Technology can be implemented in Healthcare. Finally, the majority did not know of any use cases of Big Data Technology in Greece, while 57.8% of them mentioned that they knew of use cases of Big Data Technology abroad.

  6. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    PubMed

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence are touted as transformational tools to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  7. A SWOT Analysis of Big Data

    ERIC Educational Resources Information Center

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  8. A survey of big data research

    PubMed Central

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  9. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

    Software Architecture: Trends and New Directions (#SEIswArch). © 2014 Carnegie Mellon University. Software Architecture for Big Data Systems. What is big data? From a software...

  10. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  11. Big Domains Are Novel Ca2+-Binding Modules: Evidences from Big Domains of Leptospira Immunoglobulin-Like (Lig) Proteins

    PubMed Central

    Palaniappan, Raghavan U. M.; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P.; Sharma, Yogendra; Chang, Yung-Fu

    2010-01-01

    Background: Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca2+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca2+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. Principal Findings: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca2+ with dissociation constants of 2–4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation, and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity of their folds. Conclusions: We demonstrate that the Lig proteins are Ca2+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca2+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca2+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca2+ binding. PMID:21206924

  12. Big sagebrush seed bank densities following wildfires

    USDA-ARS?s Scientific Manuscript database

    Big sagebrush (Artemisia spp.) is a critical shrub to many wildlife species including sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush is killed by wildfires and big sagebrush seed is generally short-lived and do not s...

  13. Geohydrology of Big Bear Valley, California: phase 1--geologic framework, recharge, and preliminary assessment of the source and age of groundwater

    USGS Publications Warehouse

    Flint, Lorraine E.; Brandt, Justin; Christensen, Allen H.; Flint, Alan L.; Hevesi, Joseph A.; Jachens, Robert; Kulongoski, Justin T.; Martin, Peter; Sneed, Michelle

    2012-01-01

    Big Bear Valley. The INFILv3 model was modified for this study to include a perched zone beneath the root zone to better simulate lateral seepage and recharge in the shallow subsurface in mountainous terrain. The climate input used in the INFILv3 model was developed by using daily climate data from 84 National Climatic Data Center stations and published Parameter Regression on Independent Slopes Model (PRISM) average monthly precipitation maps to match the drier average monthly precipitation measured in the Baldwin Lake drainage basin. This model resulted in a good representation of localized rain-shadow effects and calibrated well to measured lake volumes at Big Bear and Baldwin Lakes. The simulated average annual recharge was about 5,480 acre-ft/yr in the Big Bear study area, with about 2,800 acre-ft/yr in the Big Bear Lake surface-water drainage basin and about 2,680 acre-ft/yr in the Baldwin Lake surface-water drainage basin. One spring and eight wells were sampled and analyzed for chemical and isotopic data in 2005 and 2006 to determine if isotopic techniques could be used to assess the sources and ages of groundwater in the Big Bear Valley. This approach showed that the predominant source of recharge to the Big Bear Valley is winter precipitation falling on the surrounding mountains. The tritium and uncorrected carbon-14 ages of samples collected from wells for this study indicated that the groundwater basin contains water of different ages, ranging from modern to about 17,200 years old. The results of these investigations provide an understanding of the lateral and vertical extent of the groundwater basin, the spatial distribution of groundwater recharge, the processes responsible for the recharge, and the source and age of groundwater in the groundwater basin.
Although the studies do not provide an understanding of the detailed water-bearing properties necessary to determine the groundwater availability of the basin, they do provide a framework for the future
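
The "uncorrected carbon-14 ages" mentioned above are conventional radiocarbon ages: they follow from the measured 14C content relative to the modern standard using the Libby half-life, with no reservoir or calibration corrections. A minimal worked sketch; the 11.7% fraction below is an illustrative value chosen to land near the report's oldest age, not a measured one.

```python
import math

# Conventional radiocarbon ages use the Libby half-life of 5,568 yr,
# giving a mean life of 5568 / ln(2) ≈ 8,033 yr.
LIBBY_MEAN_LIFE = 8033.0  # years

def uncorrected_c14_age(fraction_modern):
    """Conventional (uncorrected) radiocarbon age in years:
    t = -8033 * ln(F), with F the measured 14C fraction of modern."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

age = uncorrected_c14_age(0.117)  # a sample retaining ~11.7% of modern 14C
```

A sample retaining about 11.7% of modern 14C dates to roughly 17,200 years, the oldest uncorrected age reported for the basin.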

  14. Surveying alignment-free features for Ortholog detection in related yeast proteomes by using supervised big data classifiers.

    PubMed

    Galpert, Deborah; Fernández, Alberto; Herrera, Francisco; Antunes, Agostinho; Molina-Ruiz, Reinaldo; Agüero-Chapin, Guillermin

    2018-05-03

    The development of new ortholog detection algorithms and the improvement of existing ones are of major importance in functional genomics. We have previously introduced a successful supervised pairwise ortholog classification approach implemented in a big data platform that considered several pairwise protein features and the low ortholog pair ratios found between two annotated proteomes (Galpert, D et al., BioMed Research International, 2015). The supervised models were built and tested using a Saccharomycete yeast benchmark dataset proposed by Salichos and Rokas (2011). Although several pairwise protein features were combined in a supervised big data approach, all of them were, to some extent, alignment-based features, and the proposed algorithms were evaluated on a single test set. Here, we aim to evaluate the impact of alignment-free features on the performance of supervised models implemented in the Spark big data platform for pairwise ortholog detection in several related yeast proteomes. Spark Random Forest and Decision Tree classifiers with oversampling and undersampling techniques, built either with alignment-based similarity measures alone or combined with several alignment-free pairwise protein features, showed the highest classification performance for ortholog detection in three yeast proteome pairs. Although such supervised approaches outperformed traditional methods, there were no significant differences between the exclusive use of alignment-based similarity measures and their combination with alignment-free features, even within the twilight zone of the studied proteomes. Only when alignment-based and alignment-free features were combined in Spark Decision Trees with imbalance management could a higher success rate (98.71%) within the twilight zone be achieved, for a yeast proteome pair that underwent a whole genome duplication. The feature selection study showed that alignment-based features were top-ranked for the best classifiers while the runners-up were
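    The imbalance management this abstract mentions (undersampling the overrepresented non-ortholog class, given the low ortholog pair ratios) can be sketched in a few lines of plain Python. This is a hypothetical illustration of the general technique, not the authors' Spark code; the record format and the `is_ortholog` label are invented for the example.

```python
import random

def undersample(records, label_key="is_ortholog", seed=42):
    """Randomly drop majority-class records until both classes are
    the same size (random undersampling for class imbalance)."""
    rng = random.Random(seed)
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)
    return balanced

# Ortholog pairs are rare relative to non-ortholog pairs:
pairs = [{"is_ortholog": True}] * 10 + [{"is_ortholog": False}] * 90
balanced = undersample(pairs)
print(len(balanced))  # 20 (ten pairs per class)
```

    Oversampling is the mirror image: the minority class is sampled with replacement until it matches the majority class size.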

  15. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for predicting disease occurrence, outcome, or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relation; and (d) clarifying whether "health" may today be redefined, as some maintain, in purely technological terms.

  16. Big Data and Analytics in Healthcare.

    PubMed

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  17. "Big data" in economic history.

    PubMed

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  18. Big Data and Ambulatory Care

    PubMed Central

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2015-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an overview of the legal framework governing health information, dispels misconceptions about privacy regulations, and highlights how ambulatory care providers in particular can maximize the utility of big data to improve care. PMID:25401945

  19. Big Data Knowledge in Global Health Education.

    PubMed

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  20. Fifty years of shear zones

    NASA Astrophysics Data System (ADS)

    Graham, Rodney

    2017-04-01

    temperature shear zones with flaser gabbro and amphibolitization must have been developed at deeper levels in the shear zone and 'dragged upwards'. An attempt to justify these assertions will be made using outcrop examples and some deep seismic data. John Ramsay was always cautious about up-scaling and indulging in large-scale tectonic speculations, but without his geometric acumen the big-scale picture would have been even less clear. Ramsay, J.G. and Graham, R.H., 1970. Strain variation in shear belts. Canadian Journal of Earth Sciences, 7(3), pp. 786-813.

  1. Big data for bipolar disorder.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain in analyzing big data, relating to its size, heterogeneity, complexity, and the unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  2. GEOSS: Addressing Big Data Challenges

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  3. Big Questions: Missing Antimatter

    ScienceCinema

    Lincoln, Don

    2018-06-08

    Einstein's equation E = mc2 is often said to mean that energy can be converted into matter. More accurately, energy can be converted to matter and antimatter. During the first moments of the Big Bang, the universe was smaller, hotter and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  4. Big data in biomedicine.

    PubMed

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Big Data’s Role in Precision Public Health

    PubMed Central

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  6. Big data in forensic science and medicine.

    PubMed

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have gained their own tribune on the topic. Perspectives and debates are flourishing, yet a consensus definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  7. Big Data and Perioperative Nursing.

    PubMed

    Westra, Bonnie L; Peterson, Jessica J

    2016-10-01

    Big data are large volumes of digital data that can be collected from disparate sources and are challenging to analyze. These data are often described with the five "Vs": volume, velocity, variety, veracity, and value. Perioperative nurses contribute to big data through documentation in the electronic health record during routine surgical care, and these data have implications for clinical decision making, administrative decisions, quality improvement, and big data science. This article explores methods to improve the quality of perioperative nursing data and provides examples of how these data can be combined with broader nursing data for quality improvement. We also discuss a national action plan for nursing knowledge and big data science and how perioperative nurses can engage in collaborative actions to transform health care. Standardized perioperative nursing data has the potential to affect care far beyond the original patient. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  8. Modeling in Big Data Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Szymczak, Samantha; Gunning, Dave

    Human-Centered Big Data Research (HCBDR) is an area of work focused on the methodologies and research questions concerned with understanding how humans interact with "big data". In the context of this paper, we refer to "big data" in a holistic sense, including most (if not all) of the dimensions defining the term, such as complexity, variety, velocity, veracity, etc. Simply put, big data requires us as researchers to question and reconsider existing approaches, with the opportunity to illuminate new kinds of insights that were traditionally out of reach to humans. The purpose of this article is to summarize the discussions and ideas about the role of models in HCBDR at a recent workshop. Models, within the context of this paper, include both computational and conceptual mental models. As such, the discussions summarized in this article seek to understand the connection between these two categories of models.

  9. NASA's Big Data Task Force

    NASA Astrophysics Data System (ADS)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  10. Big Data Technologies

    PubMed Central

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  11. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    PubMed

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than life time prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
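    The sensitivity and specificity reported for the BIG-S follow directly from a screening tool's confusion matrix. A minimal sketch; the counts below are hypothetical, chosen only to reproduce the reported rates from the stated group sizes (n=300 diagnosed, n=132 comparison), and are not taken from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with 99.7% sensitivity and
# 96.2% specificity at the reported group sizes:
sens, spec = sensitivity_specificity(tp=299, fn=1, tn=127, fp=5)
print(round(sens, 3), round(spec, 3))  # 0.997 0.962
```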

  12. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn

    The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  13. 78 FR 29289 - Safety Zone; Big Bay Boom, San Diego Bay, San Diego, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-20

    ... provide for the safety of the crew, spectators, and other users and vessels of the waterway. Persons and... submit comments by mail and would like to know that they reached the Facility, please enclose a stamped... Independence Day Fireworks Display. The safety zones will include all navigable waters within 1,000 feet of...

  14. Quantum nature of the big bang.

    PubMed

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  15. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    ERIC Educational Resources Information Center

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  16. Big data processing in the cloud - Challenges and platforms

    NASA Astrophysics Data System (ADS)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed: the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and the most difficult to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
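    Of the two architectures this abstract names, Lambda combines a periodically recomputed batch view with an incremental speed view and merges them at query time, while Kappa drops the batch layer and reprocesses the event log with the same streaming code. A toy single-process sketch of the Lambda-style merge; the names and the event-count example are invented for illustration:

```python
from collections import Counter

def batch_view(master_events):
    # Batch layer: recomputed from scratch over the immutable master dataset.
    return Counter(master_events)

def speed_view(recent_events):
    # Speed layer: incremental counts for events newer than the last batch run.
    return Counter(recent_events)

def query(batch, speed, key):
    # Serving layer: merge both views at read time.
    return batch[key] + speed[key]

master = ["click", "view", "click"]   # already absorbed by the batch layer
recent = ["click"]                    # arrived since the last batch run
print(query(batch_view(master), speed_view(recent), "click"))  # 3
```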

  17. Ethics and Epistemology in Big Data Research.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  18. Big Questions: Missing Antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    2013-08-27

    Einstein's equation E = mc2 is often said to mean that energy can be converted into matter. More accurately, energy can be converted to matter and antimatter. During the first moments of the Big Bang, the universe was smaller, hotter and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  19. A Great Year for the Big Blue Water

    NASA Astrophysics Data System (ADS)

    Leinen, M.

    2016-12-01

    It has been a great year for the big blue water. Last year the 'United_Nations' decided that it would focus on long time remain alright for the big blue water as one of its 'Millenium_Development_Goals'. This is new. In the past the big blue water was never even considered as a part of this world long time remain alright push. Also, last year the big blue water was added to the words of the group of world people paper #21 on cooling the air and things. It is hard to believe that the big blue water was not in the paper before because 70% of the world is covered by the big blue water! Many people at the group of world meeting were from our friends at 'AGU'.

  20. Real-Time Information Extraction from Big Data

    DTIC Science & Technology

    2015-10-01

    Institute for Defense Analyses. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big

  1. Water resources of the Lake Traverse Reservation, South and North Dakota, and Roberts County, South Dakota

    USGS Publications Warehouse

    Thompson, Ryan F.

    2001-01-01

    In 1994, the U.S. Geological Survey, in cooperation with the Sisseton-Wahpeton Sioux Tribe; Roberts County; and the South Dakota Department of Environment and Natural Resources, Geological Survey Program, began a 6-year investigation to describe and quantify the water resources of the area within the 1867 boundary of the Lake Traverse Reservation and adjacent parts of Roberts County. Roberts County is located in extreme northeastern South Dakota, and the 1867 boundary of the Lake Traverse Reservation encompasses much of Roberts County and parts of Marshall, Day, Codington, and Grant Counties in South Dakota and parts of Richland and Sargent Counties in southeast North Dakota. This report includes descriptions of the quantity, quality, and availability of surface and ground water, the extent of the major glacial and bedrock aquifers and named outwash groups, and surface- and ground-water uses within the 1867 boundary of the Lake Traverse Reservation and adjacent parts of Roberts County. The surface-water resources within the 1867 boundary of the Lake Traverse Reservation and adjacent parts of Roberts County include rivers, streams, lakes, and wetlands. The Wild Rice and Bois de Sioux Rivers are tributaries of the Red River within the Souris-Red-Rainy River Basin; the Little Minnesota, Jorgenson, and North Fork Whetstone Rivers are tributaries of the Minnesota River within the Upper Mississippi River Basin, and the James and Big Sioux Rivers are tributaries within the Missouri River Basin. Several of the larger lakes within the study area have been developed for recreation, while many of the smaller lakes and wetlands are used for livestock watering or as wildlife production areas. Statistical summaries are presented for the water-quality data of six selected streams within the study area, and the dominant chemical species are listed for 17 selected lakes within the study area. The glacial history of the study area has led to a rather complex system of glacial

  2. Big data and biomedical informatics: a challenging opportunity.

    PubMed

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of the reproducibility of research studies and the management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.
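    Map-reduce, named in this abstract among the tools the big data scenario makes available, splits a computation into a per-record map phase, a shuffle that groups intermediate pairs by key, and a reduce phase that aggregates each group. A toy single-process sketch; the diagnosis-code records are invented for illustration, and a real map-reduce framework runs these phases in parallel across machines:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Emit one (key, 1) pair per diagnosis code in the record.
    return [(code, 1) for code in record["codes"]]

def shuffle(pairs):
    # Group intermediate values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Aggregate each key's values.
    return {key: sum(values) for key, values in grouped.items()}

records = [{"codes": ["E11", "I10"]}, {"codes": ["E11"]}]
pairs = chain.from_iterable(map_phase(r) for r in records)
print(reduce_phase(shuffle(pairs)))  # {'E11': 2, 'I10': 1}
```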

  3. Think Big, Bigger ... and Smaller

    ERIC Educational Resources Information Center

    Nisbett, Richard E.

    2010-01-01

    One important principle of social psychology, writes Nisbett, is that some big-seeming interventions have little or no effect. This article discusses a number of cases from the field of education that confirm this principle. For example, Head Start seems like a big intervention, but research has indicated that its effects on academic achievement…

  4. Personality and job performance: the Big Five revisited.

    PubMed

    Hurtz, G M; Donovan, J J

    2000-12-01

    Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.

  5. Adding Big Data Analytics to GCSS-MC

    DTIC Science & Technology

    2014-09-30

    Keywords: Big Data, Hadoop, MapReduce, GCSS-MC. 93 pages; unclassified. Excerpt of the table of contents: 2.5 Hadoop; 3 The Experiment Design; 3.1 Why Add a Big Data Element; 3.2 Adding a Big Data Element to GCSS-MC; 3.3 Building a Hadoop Cluster.

  6. Ethics and Epistemology of Big Data.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian

    2017-12-01

    In this Symposium on the Ethics and Epistemology of Big Data, we present four perspectives on the ways in which the rapid growth in size of research databanks-i.e. their shift into the realm of "big data"-has changed their moral, socio-political, and epistemic status. While there is clearly something different about "big data" databanks, we encourage readers to place the arguments presented in this Symposium in the context of longstanding debates about the ethics, politics, and epistemology of biobank, database, genetic, and epidemiological research.

  7. The challenges of big data.

    PubMed

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  8. Rural Development: Part 3, (1) Balanced National Growth Policy; (2) National Rural Development Program; (3) S. 1612, The Rural Community Development Revenue Sharing Act of 1971; (4) Reorganization of U.S. Department of Agriculture and Related Agencies. Hearings Before the Subcommittee on Rural Development of the Committee on Agriculture and Forestry, 92d Congress, 1st Session, May 3, 1971, Sioux City, Iowa; May 4, 1971 Vermillion, ....

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Agriculture, Nutrition, and Forestry.

    Transcripts of the 1971 Senate hearings on rural development (held in Sioux City, Iowa; Montgomery, Alabama; Vermillion, South Dakota; and Tifton, Georgia) are presented in this document. Derived from many sources representing the varied interests of each host State, representative testimony includes that of: city and state officials; university…

  9. Big³. Editorial.

    PubMed

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges that we have only just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths, weaknesses, and risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions to the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the Yearbook will be published twice a year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  10. The Big Read: Case Studies

    ERIC Educational Resources Information Center

    National Endowment for the Arts, 2009

    2009-01-01

    The Big Read evaluation included a series of 35 case studies designed to gather more in-depth information on the program's implementation and impact. The case studies gave readers a valuable first-hand look at The Big Read in context. Both formal and informal interviews, focus groups, attendance at a wide range of events--all showed how…

  11. Geology of Precambrian rocks and isotope geochemistry of shear zones in the Big Narrows area, northern Front Range, Colorado

    USGS Publications Warehouse

    Abbott, Jeffrey T.

    1970-01-01

    Rocks within the Big Narrows and Poudre Park quadrangles located in the northern Front Range of Colorado are Precambrian metasedimentary and metaigneous schists and gneisses and plutonic igneous rocks. These are locally mantled by extensive late Tertiary and Quaternary fluvial gravels. The southern boundary of the Log Cabin batholith lies within the area studied. A detailed chronology of polyphase deformation, metamorphism and plutonism has been established. Early isoclinal folding (F1) was followed by a major period of plastic deformation (F2), sillimanite-microcline grade regional metamorphism, migmatization and synkinematic Boulder Creek granodiorite plutonism (1.7 b.y.). Macroscopic doubly plunging antiformal and synformal structures were developed. P-T conditions at the peak of metamorphism were probably about 670 °C and 4.5 kb. Water pressures may locally have differed from load pressures. The 1.4 b.y. Silver Plume granite plutonism was postkinematic and on the basis of petrographic and field criteria can be divided into three facies. Emplacement was by forcible injection and assimilation. Microscopic and mesoscopic folds which postdate the formation of the characteristic mineral phases during the 1.7 b.y. metamorphism are correlated with the emplacement of the Silver Plume Log Cabin batholith. Extensive retrograde metamorphism was associated with this event. A major period of mylonitization postdates Silver Plume plutonism and produced large E-W and NE trending shear zones. A detailed study of the Rb/Sr isotope geochemistry of the layered mylonites demonstrated that the mylonitization and associated recrystallization homogenized the Rb87/Sr86 ratios. Whole-rock dating techniques applied to the layered mylonites indicate a probable age of 1.2 b.y. Petrographic studies suggest that the mylonitization-recrystallization process produced hornfels facies assemblages in the adjacent metasediments. Minor Laramide faulting, mineralization and igneous activity…

  12. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    USGS Publications Warehouse

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  13. Application and Prospect of Big Data in Water Resources

    NASA Astrophysics Data System (ADS)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have emerged and are now commonly applied in many fields. However, academic studies have only recently turned their attention to Big Data applications in water resources, so water-resource Big Data technology has not yet been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its tridimensional properties: the time dimension, the spatial dimension, and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data, and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies such as data mining and web crawlers is proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data-Driven Decision) approaches will be utilized more in water resources management in the future.
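    The map-shuffle-reduce pattern that the abstract above refers to can be sketched in a few lines of plain Python, without a Hadoop cluster. This is only an illustrative sketch of the programming model; the station names and streamflow readings are hypothetical:

    ```python
    from functools import reduce
    from itertools import groupby

    # Hypothetical daily streamflow readings: (station, value) pairs,
    # standing in for the key-value records a MapReduce job would emit.
    readings = [
        ("station_a", 120.0), ("station_b", 85.5),
        ("station_a", 130.0), ("station_b", 90.5),
    ]

    # Shuffle: group records by key (groupby requires sorted input).
    mapped = sorted(readings, key=lambda kv: kv[0])
    grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=lambda kv: kv[0])}

    # Reduce: aggregate each key's values (here, total flow per station).
    totals = {k: reduce(lambda a, b: a + b, vs) for k, vs in grouped.items()}
    print(totals)  # {'station_a': 250.0, 'station_b': 176.0}
    ```

    A real Hadoop or Spark job distributes the map and reduce phases across machines, but the per-key aggregation logic is the same as in this single-process sketch.
    
    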

  14. Toward a Literature-Driven Definition of Big Data in Healthcare.

    PubMed

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
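    The criterion reported above, Log(n∗p) ≥ 7, is simple enough to check directly. The sketch below assumes a base-10 logarithm, and the two example dataset sizes are made up for illustration:

    ```python
    import math

    def is_big_data(n: int, p: int, threshold: float = 7.0) -> bool:
        """Literature-driven criterion: a dataset with n individuals and
        p variables is 'big' when log10(n * p) >= threshold."""
        return math.log10(n * p) >= threshold

    # Hypothetical EMR cohort: 200,000 patients x 300 variables,
    # log10(6e7) is about 7.78, so the criterion is met.
    print(is_big_data(200_000, 300))  # True

    # Hypothetical small clinical study: 500 subjects x 40 variables,
    # log10(2e4) is about 4.3, well below the threshold.
    print(is_big_data(500, 40))       # False
    ```

    The product n∗p makes the criterion symmetric in individuals and variables, so a wide omics dataset (small n, huge p) can qualify just as a long registry dataset (huge n, small p) does.
    
    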

  15. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    PubMed

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in health care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality of care and performance measurement, public health and research applications, among others. The author delineates the tremendous potential for big data analytics and discuss how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  16. Big Data and Biomedical Informatics: A Challenging Opportunity

    PubMed Central

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  17. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

    There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers, and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce the needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL - Scalable Parallel Interoperable Data Analytics Library - built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas, including Polar Science.

  18. Issues in Big-Data Database Systems

    DTIC Science & Technology

    2014-06-01

    Bibliography excerpt: Berman, Jules K. (2013). Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. New York: Elsevier. 261 pp.

  19. Landscape ecological security response to land use change in the tidal flat reclamation zone, China.

    PubMed

    Zhang, Runsen; Pu, Lijie; Li, Jianguo; Zhang, Jing; Xu, Yan

    2016-01-01

    As coastal development becomes a national strategy in Eastern China, land use and landscape patterns have been affected by reclamation projects. In this study, taking Rudong County, China as a typical area, we analyzed land use change and its landscape ecological security responses in the tidal flat reclamation zone. The results show that land use change in the tidal flat reclamation zone is characterized by the replacement of natural tidal flat with agricultural and construction land, which has also led to a big change in landscape patterns. We built a landscape ecological security evaluation system, which consists of landscape interference degree and landscape fragile degree, and then calculated the landscape ecological security change in the tidal flat reclamation zone from 1990 to 2008 to depict the life cycle in tidal flat reclamation. Landscape ecological security exhibited a W-shaped periodicity, including the juvenile stage, growth stage, and maturation stage. Life-cycle analysis demonstrates that 37 years is required for the land use system to transform from a natural ecosystem to an artificial ecosystem in the tidal flat reclamation zone.

  20. WE-H-BRB-00: Big Data in Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research. To optimize our current data collection by adopting new strategies from outside radiation oncology. To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  1. Epidemiology in the Era of Big Data

    PubMed Central

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  2. Toward a Literature-Driven Definition of Big Data in Healthcare

    PubMed Central

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log⁡(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  3. Big-Eyed Bugs Have Big Appetite for Pests

    USDA-ARS?s Scientific Manuscript database

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  4. Big Data - What is it and why it matters.

    PubMed

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  5. Research on information security in big data era

    NASA Astrophysics Data System (ADS)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the data-security challenges brought by big data and their causes, discusses the development trend of network attacks against the background of big data, and puts forward the author's opinions on the development of security defenses in technology, strategy, and products.

  6. Recharge Area, Base-Flow and Quick-Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter County, Missouri, 2000-04

    USGS Publications Warehouse

    Imes, Jeffrey L.; Plummer, Niel; Kleeschulte, Michael J.; Schumacher, John G.

    2007-01-01

    Exploration for lead deposits has occurred in a mature karst area of southeast Missouri that is highly valued for its scenic beauty and recreational opportunities. The area contains the two largest springs in Missouri (Big Spring and Greer Spring), both of which flow into federally designated scenic rivers. Concerns about potential mining effects on the area ground water and aquatic biota prompted an investigation of Big Spring. Water-level measurements made during 2000 helped define the recharge area of Big Spring, Greer Spring, Mammoth Spring, and Boze Mill Spring. The data indicate two distinct potentiometric surfaces. The shallow potentiometric surface, where the depth-to-water is less than about 250 feet, tends to mimic topographic features and is strongly controlled by streams. The deep potentiometric surface, where the depth-to-water is greater than about 250 feet, represents ground-water hydraulic heads within the more mature karst areas. A highly permeable zone extends about 20 miles west of Big Spring toward the upper Hurricane Creek Basin. Deeper flowing water in the Big Spring recharge area is directed toward this permeable zone. The estimated sizes of the spring recharge areas are 426 square miles for Big Spring, 352 square miles for Greer Spring, 290 square miles for Mammoth Spring, and 54 square miles for Boze Mill Spring. A discharge accumulation curve using Big Spring daily mean discharge data shows no substantial change in the discharge pattern of Big Spring during the period of record (water years 1922 through 2004). The extended periods when the spring flow deviated from the trend line can be attributed to prolonged departures from normal precipitation. The maximum possible instantaneous flow from Big Spring has not been adequately defined because of backwater effects from the Current River during high-flow conditions. Physical constraints within the spring conduit system may restrict its maximum flow. The largest discharge measured at Big Spring…

  7. ["Big data" - large data, a lot of knowledge?].

    PubMed

    Hothorn, Torsten

    2015-01-28

    For several years now, the term Big Data has described technologies for extracting knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.

  8. Big Ideas in Primary Mathematics: Issues and Directions

    ERIC Educational Resources Information Center

    Askew, Mike

    2013-01-01

    This article is located within the literature arguing for attention to Big Ideas in teaching and learning mathematics for understanding. The focus is on surveying the literature of Big Ideas and clarifying what might constitute Big Ideas in the primary Mathematics Curriculum based on both theoretical and pragmatic considerations. This is…

  9. Big Data - Smart Health Strategies

    PubMed Central

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  10. Big Data Management in US Hospitals: Benefits and Barriers.

    PubMed

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  11. Big Data in Caenorhabditis elegans: quo vadis?

    PubMed Central

    Hutter, Harald; Moerman, Donald

    2015-01-01

    A clear definition of what constitutes “Big Data” is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of “complete” data sets for this organism is actually rather small—not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein–protein interaction—important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. PMID:26543198

  12. [Relevance of big data for molecular diagnostics].

    PubMed

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This especially includes omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data from the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  13. Big data in psychology: A framework for research advancement.

    PubMed

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals and in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. 'Big data' in pharmaceutical science: challenges and opportunities.

    PubMed

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  15. Sports and the Big6: The Information Advantage.

    ERIC Educational Resources Information Center

    Eisenberg, Mike

    1997-01-01

    Explores the connection between sports and the Big6 information problem-solving process and how sports provides an ideal setting for learning and teaching about the Big6. Topics include information aspects of baseball, football, soccer, basketball, figure skating, track and field, and golf; and the Big6 process applied to sports. (LRW)

  16. Molecular ecology of the big brown bat (Eptesicus fuscus): Genetic and natural history variation in a hybrid zone

    USGS Publications Warehouse

    Neubaum, M.A.; Douglas, M.R.; Douglas, M.E.; O'Shea, T.J.

    2007-01-01

    Several geographically distinct mitochondrial DNA (mtDNA) lineages of the big brown bat (Eptesicus fuscus) have been documented in North America. Individuals from 2 of these lineages, an eastern and a western form, co-occur within maternity colonies in Colorado. The discovery of 2 divergent mtDNA lineages in sympatry prompted a set of questions regarding possible biological differences between haplotypes. We captured big brown bats at maternity roosts in Colorado and recorded data on body size, pelage color, litter size, roosting and overwintering behaviors, and local distributions. Wing biopsies were collected for genetic analysis. The ND2 region of the mtDNA molecule was used to determine lineage of the bats. In addition, nuclear DNA (nDNA) intron 1 of the β-globin gene was used to determine if mtDNA lineages are hybridizing. Eastern and western mtDNA lineages differed by 10.3% sequence divergence and examination of genetic data suggests recent population expansion for both lineages. Differences in distribution occur along the Colorado Front Range, with an increasing proportion of western haplotypes farther south. Results from nDNA analyses demonstrated hybridization between the 2 lineages. Additionally, no outstanding distinctiveness was found between the mtDNA lineages in natural history characters examined. We speculate that historical climate changes separated this species into isolated eastern and western populations, and that secondary contact with subsequent interbreeding was facilitated by European settlement. © 2007 American Society of Mammalogists.

  17. Rare, Intense, Big fires dominate the global tropics under drier conditions.

    PubMed

    Hantson, Stijn; Scheffer, Marten; Pueyo, Salvador; Xu, Chi; Lasslop, Gitta; van Nes, Egbert H; Holmgren, Milena; Mendelsohn, John

    2017-10-30

    Wildfires burn large parts of the tropics every year, shaping ecosystem structure and functioning. Yet the complex interplay between climate, vegetation and human factors that drives fire dynamics is still poorly understood. Here we show that on all continents, except Australia, tropical fire regimes change drastically as mean annual precipitation falls below 550 mm. While the frequency of fires decreases below this threshold, the size and intensity of wildfires rise sharply. This transition to a regime of Rare-Intense-Big fires (RIB-fires) corresponds to the relative disappearance of trees from the landscape. Most dry regions on the globe are projected to become substantially drier under global warming. Our findings suggest a global zone where this drying may have important implications for fire risks to society and ecosystem functioning.
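    The threshold rule reported above can be sketched as code. This is an illustrative toy only (the function name, labels, and example values are mine, not the authors'): tropical regions with mean annual precipitation below roughly 550 mm fall into the Rare-Intense-Big (RIB) fire regime, while wetter regions burn more frequently but less severely.

```python
# Toy illustration of the 550 mm mean-annual-precipitation (MAP) threshold
# reported in the abstract. Function and regime labels are hypothetical.

RIB_THRESHOLD_MM = 550  # below this, fires become rarer but bigger and more intense

def fire_regime(map_mm: float) -> str:
    """Classify a tropical region's expected fire regime by its MAP (mm/year)."""
    return "rare-intense-big" if map_mm < RIB_THRESHOLD_MM else "frequent-small"

print({m: fire_regime(m) for m in (300, 550, 800)})
# {300: 'rare-intense-big', 550: 'frequent-small', 800: 'frequent-small'}
```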

  18. Current applications of big data in obstetric anesthesiology.

    PubMed

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  19. [Big data and their perspectives in radiation therapy].

    PubMed

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  20. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  1. Volume and Value of Big Healthcare Data

    PubMed Central

    Dinov, Ivo D.

    2016-01-01

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309

  2. Simulation Experiments: Better Data, Not Just Big Data

    DTIC Science & Technology

    2014-12-01

    Modeling and Computer Simulation 22 (4): 20:1–20:17. Hogan, Joe. 2014, June 9. “So Far, Big Data is Small Potatoes”. Scientific American Blog Network. Available via http://blogs.scientificamerican.com/cross-check/2014/06/09/so-far-big-data-is-small-potatoes/. IBM. 2014. “Big Data at the Speed of Business

  3. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  4. Big data: survey, technologies, opportunities, and challenges.

    PubMed

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  5. Big Data: Survey, Technologies, Opportunities, and Challenges

    PubMed Central

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  6. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    NASA Astrophysics Data System (ADS)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a major topic nowadays. As data generation capabilities become more demanding and scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward “big data-as-a-service” is now discussed everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud platforms. This article focuses on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of performing big data analytics on a cloud platform, is in demand for a new era of growth. The article also surveys available technologies and techniques for migrating big data to the cloud.

  7. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    PubMed

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  8. Big data analytics in healthcare: promise and potential.

    PubMed

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

  9. Big data are coming to psychiatry: a general introduction.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Bauer, Michael

    2015-12-01

    Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.

  10. Genotype, soil type, and locale effects on reciprocal transplant vigor, endophyte growth, and microbial functional diversity of a narrow sagebrush hybrid zone in Salt Creek Canyon, Utah

    USGS Publications Warehouse

    Miglia, K.J.; McArthur, E.D.; Redman, R.S.; Rodriguez, R.J.; Zak, J.C.; Freeman, D.C.

    2007-01-01

    When addressing the nature of ecological adaptation and environmental factors limiting population ranges and contributing to speciation, it is important to consider not only the plant's genotype and its response to the environment, but also any close interactions that it has with other organisms, specifically, symbiotic microorganisms. To investigate this, soils and seedlings were reciprocally transplanted into common gardens of the big sagebrush hybrid zone in Salt Creek Canyon, Utah, to determine location and edaphic effects on the fitness of parental and hybrid plants. Endophytic symbionts and functional microbial diversity of indigenous and transplanted soils and sagebrush plants were also examined. Strong selection occurred against the parental genotypes in the middle hybrid zone garden in middle hybrid zone soil; F1 hybrids had the highest fitness under these conditions. Neither of the parental genotypes had superior fitness in their indigenous soils and habitats; rather F1 hybrids with the nonindigenous maternal parent were superiorly fit. Significant garden-by-soil type interactions indicate adaptation of both plant and soil microorganisms to their indigenous soils and habitats, most notably in the middle hybrid zone garden in middle hybrid zone soil. Contrasting performances of F1 hybrids suggest asymmetrical gene flow with mountain, rather than basin, big sagebrush acting as the maternal parent. We showed that the microbial community impacted the performance of parental and hybrid plants in different soils, likely limiting the ranges of the different genotypes.

  11. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
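    The record does not spell out the authors' extractor, but the underlying idea of turning biased recorded events into nearly uniform bits can be illustrated with the classical von Neumann trick. This is a deliberately simple stand-in, not the method from the paper:

```python
# Von Neumann debiasing: from a stream of independent but identically biased
# coin flips, emit one output bit per unequal pair (a, b); equal pairs are
# discarded. Classical illustration only, not the extractor from the paper.

def von_neumann_extract(bits):
    """Return debiased bits: for each pair (a, b) with a != b, emit a."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

biased = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
print(von_neumann_extract(biased))  # [1, 0]
```

    If each input bit is independently 1 with probability p, the pairs (1, 0) and (0, 1) each occur with probability p(1 − p), so every emitted bit is exactly unbiased, at the cost of discarding most of the input.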

  12. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  13. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  14. Big Data, Big Problems: Incorporating Mission, Values, and Culture in Provider Affiliations.

    PubMed

    Shaha, Steven H; Sayeed, Zain; Anoushiravani, Afshin A; El-Othmani, Mouhanad M; Saleh, Khaled J

    2016-10-01

    This article explores how integration of data from clinical registries and electronic health records produces a quality impact within orthopedic practices. Data are differentiated from information, and several types of data that are collected and used in orthopedic outcome measurement are defined. Furthermore, the concept of comparative effectiveness and its impact on orthopedic clinical research are assessed. This article places emphasis on how the concept of big data produces health care challenges balanced with benefits that may be faced by patients and orthopedic surgeons. Finally, essential characteristics of an electronic health record that interlinks musculoskeletal care and big data initiatives are reviewed. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. AirMSPI PODEX BigSur Terrain Images

    Atmospheric Science Data Center

    2013-12-13

    Browse images from the PODEX 2013 campaign: Big Sur target (Big Sur, California), 02/03/2013, terrain-projected. For more information, see the Data Product Specifications (DPS).

  16. A New Look at Big History

    ERIC Educational Resources Information Center

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spacial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  17. West Virginia's big trees: setting the record straight

    Treesearch

    Melissa Thomas-Van Gundy; Robert Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. My coauthor, historian Rob Whetsell, had been suspicious for years and approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  18. 77 FR 49779 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written comments about...

  19. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... held at the Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written...

  20. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    ScienceCinema

    None

    2017-12-09

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  1. Profiling optimization for big data transfer over dedicated channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, D.; Wu, Qishi; Rao, Nageswara S

    The transfer of big data is increasingly supported by dedicated channels in high-performance networks, where transport protocols play an important role in maximizing application-level throughput and link utilization. The performance of transport protocols largely depends on their control parameter settings, but it is prohibitively time consuming to conduct an exhaustive search in a large parameter space to find the best set of parameter values. We propose FastProf, a stochastic approximation-based transport profiler, to quickly determine the optimal operational zone of a given data transfer protocol/method over dedicated channels. We implement and test the proposed method using both emulations based on real-life performance measurements and experiments over physical connections with short (2 ms) and long (380 ms) delays. Both the emulation and experimental results show that FastProf significantly reduces the profiling overhead while achieving a comparable level of end-to-end throughput performance with the exhaustive search-based approach.
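    FastProf's own implementation is not included in this record. As a hedged illustration of the general stochastic-approximation idea the abstract describes, the sketch below uses SPSA (simultaneous perturbation stochastic approximation) to tune two hypothetical transport parameters against a synthetic, invented throughput surface; the function and parameter names are assumptions for illustration, not the paper's actual method.

    ```python
    import random

    def spsa_maximize(f, theta, n_iters=200, a=0.2, c=0.1, seed=42):
        """Simultaneous Perturbation Stochastic Approximation (SPSA):
        estimates the gradient of f from only two evaluations per step,
        regardless of how many parameters are being tuned."""
        rng = random.Random(seed)
        theta = list(theta)
        for k in range(1, n_iters + 1):
            ak = a / k ** 0.602                            # decaying step size
            ck = c / k ** 0.101                            # decaying perturbation size
            delta = [rng.choice((-1, 1)) for _ in theta]   # random +/-1 directions
            plus = [t + ck * d for t, d in zip(theta, delta)]
            minus = [t - ck * d for t, d in zip(theta, delta)]
            g_hat = (f(plus) - f(minus)) / (2 * ck)        # directional derivative estimate
            theta = [t + ak * g_hat * d for t, d in zip(theta, delta)]  # ascend
        return theta

    # Invented stand-in for measured throughput: peaks at block_size=8, n_streams=4.
    def throughput(params):
        block_size, n_streams = params
        return 10.0 - (block_size - 8.0) ** 2 - (n_streams - 4.0) ** 2

    best = spsa_maximize(throughput, [2.0, 1.0])  # start far from the optimum
    ```

    SPSA needs only two objective evaluations per step no matter how many parameters are tuned, which is exactly why a stochastic-approximation profiler can beat exhaustive grid search when each "evaluation" is an expensive real data transfer.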

  2. Structuring the Curriculum around Big Ideas

    ERIC Educational Resources Information Center

    Alleman, Janet; Knighton, Barbara; Brophy, Jere

    2010-01-01

    This article provides an inside look at Barbara Knighton's classroom teaching. She uses big ideas to guide her planning and instruction and gives other teachers suggestions for adopting the big idea approach and ways for making the approach easier. This article also represents a "small slice" of a dozen years of collaborative research,…

  3. Toward a manifesto for the 'public understanding of big data'.

    PubMed

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  4. Big Data and SME financing in China

    NASA Astrophysics Data System (ADS)

    Tian, Z.; Hassan, A. F. S.; Razak, N. H. A.

    2018-05-01

    Big Data has become increasingly prevalent in recent years, attracting attention from many quarters, including academia, industry, and even government. Big Data can be seen as the next-generation source of power for the economy. Today, Big Data represents a new way to approach information and helps all industry and business fields. The Chinese financial market has long been dominated by state-owned banks; however, these banks provide low-efficiency support to small- and medium-sized enterprises (SMEs) and private businesses. The development of Big Data is changing the financial market, with more and more financial products and services provided by Internet companies in China. Credit rating models and borrower identification make online financial services more efficient than conventional banks. These services also challenge the domination of state-owned banks.

  5. An embedding for the big bang

    NASA Technical Reports Server (NTRS)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  6. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  7. Commentary: Epidemiology in the era of big data.

    PubMed

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  8. Big Data and the Future of Radiology Informatics.

    PubMed

    Kansagra, Akash P; Yu, John-Paul J; Chatterjee, Arindam R; Lenchik, Leon; Chow, Daniel S; Prater, Adam B; Yeh, Jean; Doshi, Ankur M; Hawkins, C Matthew; Heilbrun, Marta E; Smith, Stacy E; Oselkin, Martin; Gupta, Pushpender; Ali, Sayed

    2016-01-01

    Rapid growth in the amount of data that is electronically recorded as part of routine clinical operations has generated great interest in the use of Big Data methodologies to address clinical and research questions. These methods can efficiently analyze and deliver insights from high-volume, high-variety, and high-growth rate datasets generated across the continuum of care, thereby forgoing the time, cost, and effort of more focused and controlled hypothesis-driven research. By virtue of an existing robust information technology infrastructure and years of archived digital data, radiology departments are particularly well positioned to take advantage of emerging Big Data techniques. In this review, we describe four areas in which Big Data is poised to have an immediate impact on radiology practice, research, and operations. In addition, we provide an overview of the Big Data adoption cycle and describe how academic radiology departments can promote Big Data development. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. Big Soda Lake (Nevada). 1. Pelagic bacterial heterotrophy and biomass

    USGS Publications Warehouse

    Zehr, Jon P.; Harvey, Ronald W.; Oremland, Ronald S.; Cloern, James E.; George, Leah H.; Lane, Judith L.

    1987-01-01

    Bacterial activities and abundance were measured seasonally in the water column of meromictic Big Soda Lake which is divided into three chemically distinct zones: aerobic mixolimnion, anaerobic mixolimnion, and anaerobic monimolimnion. Bacterial abundance ranged between 5 and 52 × 10⁶ cells ml⁻¹, with highest biomass at the interfaces between these zones: 2–4 mg C liter⁻¹ in the photosynthetic bacterial layer (oxycline) and 0.8–2.0 mg C liter⁻¹ in the chemocline. Bacterial cell size and morphology also varied with depth: small coccoid cells were dominant in the aerobic mixolimnion, whereas the monimolimnion had a more diverse population that included cocci, rods, and large filaments. Heterotrophic activity was measured by [methyl-³H]thymidine incorporation and [¹⁴C]glutamate uptake. Highest uptake rates were at or just below the photosynthetic bacterial layer and were attributable to small (<1 µm) heterotrophs rather than the larger photosynthetic bacteria. These high rates of heterotrophic uptake were apparently linked with fermentation; rates of other mineralization processes (e.g. sulfate reduction, methanogenesis, denitrification) in the anoxic mixolimnion were insignificant. Heterotrophic activity in the highly reduced monimolimnion was generally much lower than elsewhere in the water column. Therefore, although the monimolimnion contained most of the bacterial abundance and biomass (∼60%), most of the cells there were inactive.

  10. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the...

  11. Big Data Provenance: Challenges, State of the Art and Opportunities.

    PubMed

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
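    The workflow system behind this paper is not reproduced in the record. As a minimal sketch of the recording side of provenance, the decorator below logs, for each invocation of a workflow step, a content hash of its inputs and output, so that a derived artifact can later be traced back through the steps that produced it; all names here are hypothetical.

    ```python
    import datetime
    import functools
    import hashlib
    import json

    PROVENANCE = []  # append-only provenance log (in-memory for this sketch)

    def _fingerprint(obj):
        """Content hash so a derived artifact can be matched to its inputs."""
        blob = json.dumps(obj, sort_keys=True, default=str).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def tracked(step):
        """Record one provenance entry per invocation of a workflow step."""
        @functools.wraps(step)
        def wrapper(*args, **kwargs):
            result = step(*args, **kwargs)
            PROVENANCE.append({
                "step": step.__name__,
                "inputs": [_fingerprint(a) for a in args],
                "output": _fingerprint(result),
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return wrapper

    @tracked
    def clean(records):
        """Drop records with missing values."""
        return [r for r in records if r.get("value") is not None]

    @tracked
    def total(records):
        """Aggregate the cleaned records."""
        return sum(r["value"] for r in records)

    data = [{"value": 3}, {"value": None}, {"value": 4}]
    t = total(clean(data))  # t == 7; two provenance entries recorded
    ```

    Because the output fingerprint of `clean` reappears as the input fingerprint of `total`, the log doubles as a lineage graph that can be queried for reproducibility checks.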

  12. Big Data Provenance: Challenges, State of the Art and Opportunities

    PubMed Central

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2017-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data. PMID:29399671

  13. 1976 Big Thompson flood, Colorado

    USGS Publications Warehouse

    Jarrett, R. D.; Vandas, S.J.

    2006-01-01

    In the early evening of July 31, 1976, a large stationary thunderstorm released as much as 7.5 inches of rainfall in about an hour (about 12 inches in a few hours) in the upper reaches of the Big Thompson River drainage. This large amount of rainfall in such a short period of time produced a flash flood that caught residents and tourists by surprise. The immense volume of water that churned down the narrow Big Thompson Canyon scoured the river channel and destroyed everything in its path, including 418 homes, 52 businesses, numerous bridges, paved and unpaved roads, power and telephone lines, and many other structures. The tragedy claimed the lives of 144 people. Scores of other people narrowly escaped with their lives. The Big Thompson flood ranks among the deadliest of Colorado's recorded floods. It is one of several destructive floods in the United States that has shown the necessity of conducting research to determine the causes and effects of floods. The U.S. Geological Survey (USGS) conducts research and operates a Nationwide streamgage network to help understand and predict the magnitude and likelihood of large streamflow events such as the Big Thompson Flood. Such research and streamgage information are part of an ongoing USGS effort to reduce flood hazards and to increase public awareness.

  14. [Embracing medical innovation in the era of big data].

    PubMed

    You, Suning

    2015-01-01

    Along with the worldwide advent of the big data era, the medical field must inevitably place itself within it. This article thoroughly introduces the basic knowledge of big data and points out the coexistence of its advantages and disadvantages. Although innovation in the medical field is a struggle, the current medical pattern will be changed fundamentally by big data. The article also shows the rapid change of relevant analysis in the big data era, depicts a vision for digital medicine, and offers practical advice to surgeons.

  15. Application and Exploration of Big Data Mining in Clinical Medicine.

    PubMed

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.

  16. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    PubMed

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  17. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  18. A proposed framework of big data readiness in public sectors

    NASA Astrophysics Data System (ADS)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery of the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of its main national agenda items. Regardless of the government's commitment to promote big data amongst government agencies, the degree of readiness of the agencies as well as their employees is crucial in ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate the perceived readiness for big data amongst Malaysian government agencies. Perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness for big data among public agencies and the expected outcomes of greater or lower change readiness in the public sector.

  19. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  20. 78 FR 33326 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held July 15, 2013 at 3:00 p.m. ADDRESSES: The meeting will be held at Big Horn County Weed and...

  1. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held on March 3, 2011, and will begin at 10 a.m. ADDRESSES: The meeting will be held at the Big...

  2. In Search of the Big Bubble

    ERIC Educational Resources Information Center

    Simoson, Andrew; Wentzky, Bethany

    2011-01-01

    Freely rising air bubbles in water sometimes assume the shape of a spherical cap, a shape also known as the "big bubble". Is it possible to find some objective function involving a combination of a bubble's attributes for which the big bubble is the optimal shape? Following the basic idea of the definite integral, we define a bubble's surface as…

  3. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations, and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of...

  4. Big Data Analytics in Healthcare

    PubMed Central

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Beard, Daniel A.

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957

  5. Big Data Analytics in Healthcare.

    PubMed

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  6. Mountain big sagebrush (Artemisia tridentata spp vaseyana) seed production

    Treesearch

    Melissa L. Landeen

    2015-01-01

    Big sagebrush (Artemisia tridentata Nutt.) is the most widespread and common shrub in the sagebrush biome of western North America. Of the three most common subspecies of big sagebrush (Artemisia tridentata), mountain big sagebrush (ssp. vaseyana; MBS) is the most resilient to disturbance, but still requires favorable climatic conditions and a viable post-...

  7. New Evidence on the Development of the Word "Big."

    ERIC Educational Resources Information Center

    Sena, Rhonda; Smith, Linda B.

    1990-01-01

    Results indicate that curvilinear trend in children's understanding of word "big" is not obtained in all stimulus contexts. This suggests that meaning and use of "big" is complex, and may not refer simply to larger objects in a set. Proposes that meaning of "big" constitutes a dynamic system driven by many perceptual,…

  8. Investigating Seed Longevity of Big Sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Wijayratne, Upekala C.; Pyke, David A.

    2009-01-01

    The Intermountain West is dominated by big sagebrush communities (Artemisia tridentata subspecies) that provide habitat and forage for wildlife, prevent erosion, and are economically important to recreation and livestock industries. The two most prominent subspecies of big sagebrush in this region are Wyoming big sagebrush (A. t. ssp. wyomingensis) and mountain big sagebrush (A. t. ssp. vaseyana). Increased understanding of seed bank dynamics will assist with sustainable management and persistence of sagebrush communities. For example, mountain big sagebrush may be subjected to shorter fire return intervals and prescribed fire is a tool used often to rejuvenate stands and reduce tree (Juniperus sp. or Pinus sp.) encroachment into these communities. A persistent seed bank for mountain big sagebrush would be advantageous under these circumstances. Laboratory germination trials indicate that seed dormancy in big sagebrush may be habitat-specific, with collections from colder sites being more dormant. Our objective was to investigate seed longevity of both subspecies by evaluating viability of seeds in the field with a seed retrieval experiment and sampling for seeds in situ. We chose six study sites for each subspecies. These sites were dispersed across eastern Oregon, southern Idaho, northwestern Utah, and eastern Nevada. Ninety-six polyester mesh bags, each containing 100 seeds of a subspecies, were placed at each site during November 2006. Seed bags were placed in three locations: (1) at the soil surface above litter, (2) on the soil surface beneath litter, and (3) 3 cm below the soil surface to determine whether dormancy is affected by continued darkness or environmental conditions. Subsets of seeds were examined in April and November in both 2007 and 2008 to determine seed viability dynamics. Seed bank samples were taken at each site, separated into litter and soil fractions, and assessed for number of germinable seeds in a greenhouse. 
Community composition data...

  9. Smart Information Management in Health Big Data.

    PubMed

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymous patient records in a big data store and their extraction in order to provide needed real-time intelligence. The purpose of the present study is to highlight the design and implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files to simulate a NoSQL database, and on the other hand, the extraction of information based on a lookup table and a cache mechanism. The SIMS in health big data aims at the identification of new therapies and approaches to delivering care.
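    The SIMS implementation itself is not given in this record. The sketch below illustrates, under assumed names and toy data, the two mechanisms the abstract mentions: a flat file organized like a simple NoSQL store with a lookup table of byte offsets, and a bounded cache in front of it.

    ```python
    import json
    import os
    import tempfile

    class FlatFileStore:
        """Append-only flat file of JSON records, plus a lookup table
        (record id -> byte offset) and a small FIFO read cache."""

        def __init__(self, path, cache_size=128):
            self.path = path
            self.index = {}              # lookup table: id -> byte offset
            self.cache = {}              # bounded read cache: id -> record
            self.cache_size = cache_size
            open(path, "ab").close()     # ensure the flat file exists

        def put(self, record_id, record):
            with open(self.path, "ab") as f:   # append mode: position is end-of-file
                self.index[record_id] = f.tell()
                f.write((json.dumps(record) + "\n").encode())

        def get(self, record_id):
            if record_id in self.cache:        # cache hit: no disk read
                return self.cache[record_id]
            with open(self.path, "rb") as f:   # cache miss: seek via lookup table
                f.seek(self.index[record_id])
                record = json.loads(f.readline().decode())
            if len(self.cache) >= self.cache_size:
                self.cache.pop(next(iter(self.cache)))  # evict oldest entry
            self.cache[record_id] = record
            return record

    # Demo with a tiny cache so eviction is visible.
    path = os.path.join(tempfile.mkdtemp(), "records.jsonl")
    store = FlatFileStore(path, cache_size=1)
    store.put("p001", {"age": 54, "dx": "hypertension"})
    store.put("p002", {"age": 31, "dx": "asthma"})
    ```

    The lookup table makes each read a single seek, and the cache absorbs repeated reads of hot records; a real deployment would persist the index and likely use LRU rather than FIFO eviction.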

  10. Integrative methods for analyzing big data in precision medicine.

    PubMed

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the area of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarkers discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Big Dreams

    ERIC Educational Resources Information Center

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  12. Translating Big Data into Smart Data for Veterinary Epidemiology.

    PubMed

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to creating "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.
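
    As a loose illustration of the kind of near real-time monitoring pipeline the authors describe, the sketch below flags a spike in daily case counts against an exponentially weighted baseline; the smoothing constant, threshold rule, and data are invented for illustration:

```python
def ewma_alerts(counts, alpha=0.3, threshold=3.0):
    """Flag days whose count exceeds the running EWMA baseline by a fixed
    multiple of the running mean absolute deviation (hypothetical rule)."""
    baseline = counts[0]
    dev = 1.0
    alerts = []
    for day, c in enumerate(counts):
        if day > 0 and c > baseline + threshold * dev:
            alerts.append(day)
        # update the smoothed baseline and deviation after the check
        dev = (1 - alpha) * dev + alpha * abs(c - baseline)
        baseline = (1 - alpha) * baseline + alpha * c
    return alerts

# ten quiet days followed by a spike on day 10
daily = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 30]
print(ewma_alerts(daily))  # -> [10]
```

    A production surveillance pipeline would stream counts from a feed rather than a list, but the detect-then-update loop is the core pattern.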

  13. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The application of ANN for zone identification in a complex reservoir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, A.C.; Molnar, D.; Aminian, K.

    1995-12-31

    Reservoir characterization plays a critical role in appraising the economic success of reservoir management and development methods. Nearly all reservoirs show some degree of heterogeneity, which invariably impacts production. As a result, the production performance of a complex reservoir cannot be realistically predicted without an accurate reservoir description. Characterization of a heterogeneous reservoir is a complex problem. The difficulty stems from the fact that sufficient data to accurately predict the distribution of the formation attributes are not usually available. Generally, geophysical logs are available from a considerable number of wells in the reservoir. Therefore, a methodology for reservoir description and characterization utilizing only well log data represents a significant technical as well as economic advantage. One of the key issues in the description and characterization of heterogeneous formations is the distribution of various zones and their properties. In this study, several artificial neural networks (ANN) were successfully designed and developed for zone identification in a heterogeneous formation from geophysical well logs. Granny Creek Field in West Virginia was selected as the study area in this paper. This field has produced oil from the Big Injun Formation since the early 1900s. Water flooding operations were initiated in the 1970s and are currently still in progress. Well log data on a substantial number of wells in this reservoir were available and were collected. Core analysis results were also available from a few wells. The log data from 3 wells along with the various zone definitions were utilized to train the networks for zone recognition. The data from 2 other wells with previously determined zones, based on the core and log data, were then utilized to verify the developed networks' predictions. The results indicated that ANN can be a useful tool for accurately identifying the zones in complex reservoirs.
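
    The study's actual network architectures and log inputs are not reproduced here, but the general idea of learning zone labels from log responses can be sketched with a toy one-neuron classifier trained on synthetic data standing in for two log curves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two log responses (e.g. gamma ray, bulk density);
# the real study's inputs and zone definitions are not reproduced here.
n = 200
pay = np.column_stack([rng.normal(40, 5, n), rng.normal(2.3, 0.05, n)])
shale = np.column_stack([rng.normal(90, 5, n), rng.normal(2.6, 0.05, n)])
X = np.vstack([pay, shale])
y = np.array([1] * n + [0] * n)  # 1 = pay zone, 0 = non-pay

# normalise the features, then train a one-neuron "network" by gradient descent
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.5 * (p - y).mean()                # gradient step on the bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(round(accuracy, 2))
```

    The paper's networks map log responses to multiple zone classes; this single-output sketch only shows the supervised-training idea on well-separated synthetic data.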

  15. Quality of Big Data in Healthcare

    DOE PAGES

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    2015-01-01

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  16. Quality of Big Data in Healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  17. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  19. The BIG Data Center: from deposition to integration to translation

    PubMed Central

    2017-01-01

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. PMID:27899658

  20. The BIG Data Center: from deposition to integration to translation.

    PubMed

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Application and Exploration of Big Data Mining in Clinical Medicine

    PubMed Central

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine was obtained from PubMed and the Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378
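
    Among the techniques the review lists, a naive Bayes classifier (a minimal Bayesian network) is compact enough to sketch for disease risk assessment; the risk factors and records below are invented toy data, not drawn from the review:

```python
from collections import defaultdict

def train_naive_bayes(records):
    """Fit a Bernoulli naive Bayes over binary risk factors.
    records: list of (features_dict, label) pairs, label in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [positives|label=0, positives|label=1]
    totals = [0, 0]
    for feats, label in records:
        totals[label] += 1
        for f, v in feats.items():
            if v:
                counts[f][label] += 1
    return counts, totals

def predict(counts, totals, feats):
    """Return the more probable label, using Laplace smoothing."""
    scores = []
    for label in (0, 1):
        p = (totals[label] + 1) / (sum(totals) + 2)       # smoothed prior
        for f, v in feats.items():
            cond = (counts[f][label] + 1) / (totals[label] + 2)
            p *= cond if v else (1 - cond)
        scores.append(p)
    return scores.index(max(scores))

data = [({"smoker": 1, "hypertension": 1}, 1),
        ({"smoker": 1, "hypertension": 0}, 1),
        ({"smoker": 0, "hypertension": 1}, 0),
        ({"smoker": 0, "hypertension": 0}, 0)]
counts, totals = train_naive_bayes(data)
print(predict(counts, totals, {"smoker": 1, "hypertension": 1}))  # -> 1 (high risk)
```

    Real clinical risk models are trained on far larger feature sets and validated carefully; this only illustrates the conditional-independence mechanics.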

  2. Rethinking big data: A review on the data quality and usage issues

    NASA Astrophysics Data System (ADS)

    Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng

    2016-05-01

    The recent explosive publications of big data studies have well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings lots of "big errors" in data quality and data usage, and it cannot be used as a substitute for sound research design and solid theories. We indicated and summarized the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection, information incompleteness and noise of big data, unrepresentativeness, consistency and reliability, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify those "big errors" brought by big data.

  3. Processing Solutions for Big Data in Astronomy

    NASA Astrophysics Data System (ADS)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
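
    The map/shuffle/reduce schema described above can be illustrated with a single-process toy word count; a real Hadoop or Spark job distributes the same three phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in one input line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate the grouped values (here, a word count)."""
    return key, sum(values)

lines = ["big data frameworks", "big data tools", "data"]
mapped = chain.from_iterable(map_phase(l) for l in lines)
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 3, 'frameworks': 1, 'tools': 1}
```

    In Hadoop the map and reduce functions run on separate machines against file splits in HDFS; Spark keeps the intermediate pairs in memory, which is the main source of its speed-up.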

  4. "Small Steps, Big Rewards": Preventing Type 2 Diabetes

    MedlinePlus

    ... These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign ...

  5. Big Bend National Park

    NASA Image and Video Library

    2017-12-08

    Alternately known as a geologist’s paradise and a geologist’s nightmare, Big Bend National Park in southwestern Texas offers a multitude of rock formations. Sparse vegetation makes finding and observing the rocks easy, but they document a complicated geologic history extending back 500 million years. On May 10, 2002, the Enhanced Thematic Mapper Plus on NASA’s Landsat 7 satellite captured this natural-color image of Big Bend National Park. A black line delineates the park perimeter. The arid landscape appears in muted earth tones, some of the darkest hues associated with volcanic structures, especially the Rosillos and Chisos Mountains. Despite its bone-dry appearance, Big Bend National Park is home to some 1,200 plant species, and hosts more kinds of cacti, birds, and bats than any other U.S. national park. Read more: go.nasa.gov/2bzGaZU Credit: NASA/Landsat7

  6. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies are continuing to grow in not only size but also variety and complexity, with great speed. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in life sciences, big data related projects and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for the big data in life sciences.
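
    The core Semantic Web idea of integrating heterogeneous sources as subject-predicate-object statements can be sketched without any RDF tooling; the triples and predicates below are invented examples, and real deployments would use RDF libraries and SPARQL endpoints instead:

```python
# Minimal in-memory triple store: each fact from any source becomes a
# (subject, predicate, object) statement in one shared graph.
triples = {
    ("BRCA1", "encodes", "Breast cancer type 1 protein"),           # gene source
    ("Breast cancer type 1 protein", "involved_in", "DNA repair"),  # protein source
    ("BRCA1", "located_on", "chromosome 17"),                       # genome source
}

def query(s=None, p=None, o=None):
    """Pattern match over the store; None acts as a wildcard variable,
    like an unbound variable in a SPARQL triple pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# cross-source question: which processes involve the protein encoded by BRCA1?
protein = query("BRCA1", "encodes")[0][2]
processes = [o for _, _, o in query(protein, "involved_in")]
print(processes)  # -> ['DNA repair']
```

    The point of the shared triple model is exactly this kind of join across sources that never agreed on a common schema, only on shared identifiers.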

  7. Big data analytics to aid developing livable communities.

    DOT National Transportation Integrated Search

    2015-12-31

    In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...

  8. Ontogeny of Big endothelin-1 effects in newborn piglet pulmonary vasculature.

    PubMed

    Liben, S; Stewart, D J; De Marte, J; Perreault, T

    1993-07-01

    Endothelin-1 (ET-1), a 21-amino acid peptide produced by endothelial cells, results from the cleavage of preproendothelin, generating Big ET-1, which is then cleaved by the ET-converting enzyme (ECE) to form ET-1. Big ET-1, like ET-1, is released by endothelial cells. Big ET-1 is equipotent to ET-1 in vivo, whereas its vasoactive effects are less in vitro. It has been suggested that the effects of Big ET-1 depend on its conversion to ET-1. ET-1 has potent vasoactive effects in the newborn pig pulmonary circulation, however, the effects of Big ET-1 remain unknown. Therefore, we studied the effects of Big ET-1 in isolated perfused lungs from 1- and 7-day-old piglets using the ECE inhibitor, phosphoramidon, and the ETA receptor antagonist, BQ-123Na. The rate of conversion of Big ET-1 to ET-1 was measured using radioimmunoassay. ET-1 (10(-13) to 10(-8) M) produced an initial vasodilation, followed by a dose-dependent potent vasoconstriction (P < 0.001), which was equal at both ages. Big ET-1 (10(-11) to 10(-8) M) also produced a dose-dependent vasoconstriction (P < 0.001). The constrictor effects of Big ET-1 and ET-1 were similar in the 1-day-old, whereas in the 7-day-old, the constrictor effect of Big ET-1 was less than that of ET-1 (P < 0.017).(ABSTRACT TRUNCATED AT 250 WORDS)

  9. Infrastructure for Big Data in the Intensive Care Unit.

    PubMed

    Zelechower, Javier; Astudillo, José; Traversaro, Francisco; Redelico, Francisco; Luna, Daniel; Quiros, Fernan; San Roman, Eduardo; Risk, Marcelo

    2017-01-01

    The Big Data paradigm can be applied in the intensive care unit in order to improve the treatment of patients through customized decisions. This poster is about the infrastructure necessary to build a Big Data system for the ICU. Together with the infrastructure, the formation of a multidisciplinary team is essential to develop Big Data for use in critical care medicine.

  10. 76 FR 7837 - Big Rivers Electric Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ11-11-000] Big Rivers Electric Corporation; Notice of Filing Take notice that on February 4, 2011, Big Rivers Electric Corporation (Big Rivers) filed a notice of cancellation of its Second Revised and Restated Open Access...

  11. Data management by using R: big data clinical research series.

    PubMed

    Zhang, Zhongheng

    2015-11-01

    Electronic medical record (EMR) systems have been widely used in clinical practice. By replacing traditional handwritten records, the EMR makes big data clinical research feasible. The most important feature of big data research is its real-world setting. Furthermore, big data research can provide all aspects of information related to healthcare. However, big data research requires skills in data management that are rarely covered in the curriculum of medical education. This greatly hinders doctors from testing their clinical hypotheses by using the EMR. To bridge this gap, a series of articles introducing data management techniques is put forward to guide clinicians toward big data clinical research. The present educational article first introduces some basic knowledge of the R language, followed by data management skills for creating new variables, recoding variables and renaming variables. These are very basic skills and may be used in every big data research project.
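
    The article's examples are in R; to keep the sketches in this document in a single language, here is a rough Python/pandas analogue of the three skills mentioned (creating, recoding and renaming variables), with invented column names:

```python
import pandas as pd

# Toy EMR extract; the column names and values are illustrative only.
emr = pd.DataFrame({
    "age": [54, 61, 47],
    "weight_kg": [82.0, 95.5, 70.2],
    "height_m": [1.75, 1.68, 1.80],
})

# creating a new variable (R analogue: emr$bmi <- weight / height^2)
emr["bmi"] = emr["weight_kg"] / emr["height_m"] ** 2

# recoding a variable into categories (R analogue: cut() or ifelse())
emr["bmi_class"] = pd.cut(emr["bmi"], bins=[0, 25, 30, 100],
                          labels=["normal", "overweight", "obese"])

# renaming variables (R analogue: names(emr)[...] <- ...)
emr = emr.rename(columns={"weight_kg": "weight", "height_m": "height"})

print(list(emr.columns))  # ['age', 'weight', 'height', 'bmi', 'bmi_class']
```

    The same three operations in R (`transform`, `cut`, `names<-` or `dplyr::rename`) are what the article series walks through on real EMR extracts.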

  12. (Quasi)-convexification of Barta's (multi-extrema) bounding theorem: $\inf_x\big(\frac{H\Phi(x)}{\Phi(x)}\big) \le E_{gr} \le \sup_x\big(\frac{H\Phi(x)}{\Phi(x)}\big)$

    NASA Astrophysics Data System (ADS)

    Handy, C. R.

    2006-03-01

    There has been renewed interest in the exploitation of Barta's configuration space theorem (BCST) (Barta 1937 C. R. Acad. Sci. Paris 204 472), which bounds the ground-state energy by $\inf_x\big(\frac{H\Phi(x)}{\Phi(x)}\big) \le E_{gr} \le \sup_x\big(\frac{H\Phi(x)}{\Phi(x)}\big)$, using any Φ lying within the space of positive, bounded, and sufficiently smooth functions, $\mathcal{C}$. Mouchet's (Mouchet 2005 J. Phys. A: Math. Gen. 38 1039) BCST analysis is based on gradient optimization (GO). However, it overlooks significant difficulties: (i) appearance of multi-extrema; (ii) inefficiency of GO for stiff (singular perturbation/strong coupling) problems; (iii) the nonexistence of a systematic procedure for arbitrarily improving the bounds within $\mathcal{C}$. These deficiencies can be corrected by transforming BCST into a moments' representation equivalent, and exploiting a generalization of the eigenvalue moment method (EMM), within the context of the well-known generalized eigenvalue problem (GEP), as developed here. EMM is an alternative eigenenergy bounding, variational procedure, overlooked by Mouchet, which also exploits the positivity of the desired physical solution. Furthermore, it is applicable to Hermitian and non-Hermitian systems with complex-number quantization parameters (Handy and Bessis 1985 Phys. Rev. Lett. 55 931, Handy et al 1988 Phys. Rev. Lett. 60 253, Handy 2001 J. Phys. A: Math. Gen. 34 5065, Handy et al 2002 J. Phys. A: Math. Gen. 35 6359). Our analysis exploits various quasi-convexity/concavity theorems common to the GEP representation. We outline the general theory, and present some illustrative examples.
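
    As a numerical illustration of the BCST bound itself (not of the EMM machinery the paper develops), take the quartic oscillator $H = -\frac{d^2}{dx^2} + x^4$ with the trial function $\Phi(x) = e^{-x^2/2}$, for which $H\Phi/\Phi = 1 - x^2 + x^4$ analytically; on an unbounded domain the supremum diverges, so only the infimum gives a nontrivial bound here:

```python
import numpy as np

# Evaluate the Barta ratio H*Phi/Phi = 1 - x^2 + x^4 on a fine grid and
# take its minimum, which lower-bounds the ground-state energy E_gr.
x = np.linspace(-3.0, 3.0, 20001)
ratio = 1.0 - x**2 + x**4

lower_bound = ratio.min()   # exact value is 3/4, attained at x^2 = 1/2
print(round(lower_bound, 4))  # -> 0.75
```

    So this (deliberately non-optimal) Φ certifies $E_{gr} \ge 3/4$ for the quartic oscillator; sharper trial functions, or the moment-based reformulation the paper advocates, tighten the bound toward the true ground-state energy.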

  13. Keeping up with Big Data--Designing an Introductory Data Analytics Class

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2016-01-01

    Universities need to keep up with the demand of the business world when it comes to Big Data. The exponential increase in data has put additional demands on academia to meet the big gap in education. Business demand for Big Data has surpassed 1.9 million positions in 2015. Big Data, Business Intelligence, Data Analytics, and Data Mining are the…

  14. [Applications of eco-environmental big data: Progress and prospect].

    PubMed

    Zhao, Miao Miao; Zhao, Shi Cheng; Zhang, Li Yun; Zhao, Fen; Shao, Rui; Liu, Li Xiang; Zhao, Hai Feng; Xu, Ming

    2017-05-18

    With the advance of internet and wireless communication technology, the fields of ecology and environment have entered a new digital era, with the amount of data growing explosively and big data technologies attracting more and more attention. Eco-environmental big data builds on airborne, space-based and land-based observations of ecological and environmental factors, and its ultimate goal is to integrate multi-source and multi-scale data for information mining by taking advantage of cloud computation, artificial intelligence, and modeling technologies. In comparison with other fields, eco-environmental big data has its own characteristics, such as diverse data formats and sources, data collected with various protocols and standards, and serving different clients and organizations with special requirements. Big data technology has been applied worldwide in ecological and environmental fields, including global climate prediction, ecological network observation and modeling, and regional air pollution control. The development of eco-environmental big data in China is facing many problems, such as data sharing issues, outdated monitoring facilities and technologies, and insufficient data mining capacity. Despite all this, big data technology is critical to solving eco-environmental problems, improving prediction and warning accuracy on eco-environmental catastrophes, and boosting scientific research in the field in China. We expect that eco-environmental big data will contribute significantly to policy making and environmental services and management, and thus to sustainable development and eco-civilization construction in China in the coming decades.

  15. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

  16. Insights into big sagebrush seedling storage practices

    Treesearch

    Emily C. Overton; Jeremiah R. Pinto; Anthony S. Davis

    2013-01-01

    Big sagebrush (Artemisia tridentata Nutt. [Asteraceae]) is an essential component of shrub-steppe ecosystems in the Great Basin of the US, where degradation due to altered fire regimes, invasive species, and land use changes has led to increased interest in the production of high-quality big sagebrush seedlings for conservation and restoration projects. Seedling...

  17. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
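
    One concrete form of the retrospective designed sampling advocated above is stratified subsampling of an imbalanced big dataset, so each stratum is equally represented in the analysis set; the strata, sizes and data below are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical big dataset: records tagged with a stratum (e.g. clinic site),
# heavily imbalanced, as big observational data often are.
population = [("A", random.gauss(0, 1)) for _ in range(9000)] + \
             [("B", random.gauss(2, 1)) for _ in range(1000)]

def stratified_subsample(records, per_stratum, seed=0):
    """Retrospective designed sampling: draw an equal-sized random subsample
    from each stratum rather than sampling the big dataset uniformly."""
    rng = random.Random(seed)
    by_stratum = {}
    for stratum, value in records:
        by_stratum.setdefault(stratum, []).append((stratum, value))
    sample = []
    for stratum in sorted(by_stratum):
        sample.extend(rng.sample(by_stratum[stratum], per_stratum))
    return sample

sample = stratified_subsample(population, per_stratum=200)
counts = {s: sum(1 for st, _ in sample if st == s) for s in ("A", "B")}
print(counts)  # -> {'A': 200, 'B': 200}
```

    A uniform 400-record sample would contain roughly 40 records from stratum B; the designed subsample guarantees 200, which is the inferential gain the paper argues for when the question of interest concerns rare strata.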

  18. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  19. Big Data and Nursing: Implications for the Future.

    PubMed

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

    Big data is becoming increasingly more prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the required skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing-sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  20. Information Retrieval Using Hadoop Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Motwani, Deepak; Madan, Madan Lal

    This paper concerns big data analysis, the cognitive operation of probing huge amounts of information in an attempt to uncover unseen patterns. Through big data analytics, organizations in both the public and private sectors have made a strategic decision to turn big data into competitive advantage. The primary task of extracting value from big data relies on a process that pulls information from multiple different sources; this process is known as extract, transform, and load (ETL). The approach in this paper extracts information from log files and research papers, reducing the effort needed for pattern finding and for summarizing documents from several locations. The work helps readers better understand basic Hadoop concepts and improves the user experience for research. We propose a Hadoop-based approach to analyzing log files that finds concise, useful information while saving time. The proposed approach will be applied to research papers in a specific domain to obtain summarized content for further improvement and the creation of new content.
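
    The map/reduce pattern this abstract applies via Hadoop can be illustrated without a cluster. The following plain-Python sketch mimics a single mapper and reducer over a few made-up log lines; the log format and field positions are illustrative assumptions, not the paper's actual data.

```python
from collections import Counter

# Toy access-log lines (illustrative; a real Hadoop job would read from HDFS).
LOG_LINES = [
    '10.0.0.1 GET /index.html 200',
    '10.0.0.2 GET /search 404',
    '10.0.0.1 POST /login 200',
    '10.0.0.3 GET /index.html 200',
]

def map_phase(line):
    """Emit a (status_code, 1) pair, as a Hadoop mapper would."""
    parts = line.split()
    return (parts[3], 1)

def reduce_phase(pairs):
    """Sum counts per key, as a Hadoop reducer would."""
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Concise summary of HTTP status codes extracted from the raw logs.
status_counts = reduce_phase(map_phase(line) for line in LOG_LINES)
```

In Hadoop the same mapper and reducer would run in parallel across many file splits; the logic per record is unchanged.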

  1. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  2. The big data processing platform for intelligent agriculture

    NASA Astrophysics Data System (ADS)

    Huang, Jintao; Zhang, Lichen

    2017-08-01

    Big data technology is another popular technology after the Internet of Things and cloud computing. Big data is widely used in many fields such as social platforms, e-commerce, and financial analysis. Intelligent agriculture produces large amounts of data of complex structure in the course of its operation, and fully mining the value of these data would be very meaningful for the development of agriculture. This paper proposes an intelligent data processing platform based on Storm and Cassandra to realize the storage and management of big data for intelligent agriculture.
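
    A Storm topology wires data sources (spouts) to processing stages (bolts), with results persisted to a store such as Cassandra. As a rough, hedged illustration of that flow, here is a plain-Python stand-in; the sensor fields and readings are invented for the example, and an in-memory dict takes the place of a Cassandra table.

```python
def sensor_spout():
    """Stand-in for a Storm spout: emits (field_id, soil_moisture) readings."""
    readings = [('field-1', 0.32), ('field-2', 0.18), ('field-1', 0.30)]
    yield from readings

def average_bolt(stream):
    """Stand-in for a Storm bolt: aggregates a running average per field."""
    totals = {}
    for field_id, value in stream:
        count, total = totals.get(field_id, (0, 0.0))
        totals[field_id] = (count + 1, total + value)
    return {fid: total / count for fid, (count, total) in totals.items()}

# An in-memory dict stands in for the Cassandra table the platform would use.
store = average_bolt(sensor_spout())
```

In the real platform each stage runs as a distributed component and the stream never terminates; the per-tuple aggregation logic is the part sketched here.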

  3. Research Activities at Fermilab for Big Data Movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W

    2013-01-01

    Adoption of 100 GE networking infrastructure is the next step toward management of Big Data. As the US Tier-1 center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab constantly deals with the scaling and wide-area distribution challenges of big data. In this paper, we describe some of the challenges involved in the movement of big data over 100 GE infrastructure and the research activities at Fermilab to address these challenges.

  4. [Utilization of Big Data in Medicine and Future Outlook].

    PubMed

    Kinosada, Yasutomi; Uematsu, Machiko; Fujiwara, Takuya

    2016-03-01

    "Big data" is a new buzzword. The point is not to be dazzled by the volume of data, but rather to analyze it, and convert it into insights, innovations, and business value. There are also real differences between conventional analytics and big data. In this article, we show some results of big data analysis using open DPC (Diagnosis Procedure Combination) data in areas of the central part of JAPAN: Toyama, Ishikawa, Fukui, Nagano, Gifu, Aichi, Shizuoka, and Mie Prefectures. These 8 prefectures contain 51 medical administration areas called the second medical area. By applying big data analysis techniques such as k-means, hierarchical clustering, and self-organizing maps to DPC data, we can visualize the disease structure and detect similarities or variations among the 51 second medical areas. The combination of a big data analysis technique and open DPC data is a very powerful method to depict real figures on patient distribution in Japan.

  5. Research on Technology Innovation Management in Big Data Environment

    NASA Astrophysics Data System (ADS)

    Ma, Yanhong

    2018-02-01

    With the continuous development and progress of the information age, the demand for information keeps growing, and the processing and analysis of data are moving toward ever larger scales. The increasing volume of data places higher demands on processing technology. The explosive growth of data in today's society has prompted the advent of the era of big data. Producing and processing the many kinds of information and data in daily life now carries greater value and significance. Using big data technology to process and analyze data quickly, so as to improve the level of big data management, is an important step in promoting the development of data processing technology in China. To some extent, innovative research on the management of information technology in the era of big data can enhance the country's overall strength and secure China's position in the development of the big data era.

  6. Association of Big Endothelin-1 with Coronary Artery Calcification.

    PubMed

    Qing, Ping; Li, Xiao-Lin; Zhang, Yan; Li, Yi-Lin; Xu, Rui-Xia; Guo, Yuan-Lin; Li, Sha; Wu, Na-Qiong; Li, Jian-Jun

    2015-01-01

    Coronary artery calcification (CAC) is clinically considered one of the important predictors of atherosclerosis. Several studies have confirmed that endothelin-1 (ET-1) plays an important role in the process of atherosclerosis formation. The aim of this study was to investigate whether big ET-1 is associated with CAC. A total of 510 consecutively admitted patients from February 2011 to May 2012 in Fu Wai Hospital were analyzed. All patients received coronary computed tomography angiography and were then divided into two groups based on the coronary artery calcium score (CACS). Clinical characteristics including traditional and calcification-related risk factors were collected, and plasma big ET-1 level was measured by ELISA. Patients with CAC had significantly elevated big ET-1 levels compared with those without CAC (0.5 ± 0.4 vs. 0.2 ± 0.2, P<0.001). In the multivariate analysis, big ET-1 (Tertile 2, HR = 3.09, 95% CI 1.66-5.74, P<0.001; Tertile 3, HR = 10.42, 95% CI 3.62-29.99, P<0.001) appeared as an independent predictor of the presence of CAC. There was a positive correlation of the big ET-1 level with CACS (r = 0.567, P<0.001). The 10-year Framingham risk (%) was higher in the group with CACS>0 and the highest tertile of big ET-1 (P<0.01). The area under the receiver operating characteristic curve for the big ET-1 level in predicting CAC was 0.83 (95% CI 0.79-0.87, P<0.001), with a sensitivity of 70.6% and specificity of 87.7%. These data demonstrated for the first time that the plasma big ET-1 level is a valuable independent predictor of CAC.
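
    The sensitivity, specificity, and ROC area reported here are standard quantities. This sketch computes them for a made-up marker, using the rank-based (Mann-Whitney) formulation of AUC; the numbers below are illustrative only and unrelated to the study's data.

```python
def sensitivity_specificity(values, labels, cutoff):
    """Sensitivity and specificity of the rule 'positive if value >= cutoff'."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cutoff)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cutoff)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def auc(values, labels):
    """AUC as the probability that a random positive scores above a random
    negative (ties count half); this equals the area under the ROC curve."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up marker levels with binary outcome (1 = calcification present).
levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
cac    = [0,   0,   0,   1,   0,   1,   1,   1]
sens, spec = sensitivity_specificity(levels, cac, cutoff=0.4)
area = auc(levels, cac)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 - specificity traces the full ROC curve.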

  7. Meta-analyses of Big Six Interests and Big Five Personality Factors.

    ERIC Educational Resources Information Center

    Larson, Lisa M.; Rottinghaus, Patrick J.; Borgen, Fred H.

    2002-01-01

    Meta-analysis of 24 samples demonstrated overlap between Holland's vocational interest domains (measured by Self Directed Search, Strong Interest Inventory, and Vocational Preference Inventory) and Big Five personality factors (measured by Revised NEO Personalty Inventory). The link is stronger for five interest-personality pairs:…

  8. [Big Data- challenges and risks].

    PubMed

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  9. Big game habitat use in southeastern Montana

    Treesearch

    James G. MacCracken; Daniel W. Uresk

    1984-01-01

    The loss of suitable, high quality habitat is a major problem facing big game managers in the western United States. Agricultural, water, road and highway, housing, and recreational development have contributed to loss of natural big game habitat (Wallmo et al. 1976, Reed 1981). In the western United States, surface mining of minerals has great potential to adversely...

  10. A Big Data Analytics Methodology Program in the Health Sector

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  11. View of New Big Oak Flat Road seen from Old ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of New Big Oak Flat Road seen from Old Wawona Road near location of photograph HAER CA-148-17. Note road cuts, alignment, and tunnels. Devils Dance Floor at left distance. Looking northwest - Big Oak Flat Road, Between Big Oak Flat Entrance & Merced River, Yosemite Village, Mariposa County, CA

  12. The Study of “big data” to support internal business strategists

    NASA Astrophysics Data System (ADS)

    Ge, Mei

    2018-01-01

    How is big data different from previous data analysis systems? The primary purpose behind the traditional small-data analytics that all managers are more or less familiar with is to support internal business strategies. But big data also offers a promising new dimension: discovering new opportunities to offer customers high-value products and services. This study introduces some of the strategies that big data supports. Business decisions using big data can also draw on several areas of analytics, including customer satisfaction, customer journeys, supply chains, risk management, competitive intelligence, pricing, and discovery and experimentation.

  13. Occurrence and Partial Characterization of Lettuce big vein associated virus and Mirafiori lettuce big vein virus in Lettuce in Iran.

    PubMed

    Alemzadeh, E; Izadpanah, K

    2012-12-01

    Mirafiori lettuce big vein virus (MiLBVV) and lettuce big vein associated virus (LBVaV) were found in association with big vein disease of lettuce in Iran. Analysis of part of the coat protein (CP) gene of Iranian isolates of LBVaV showed 97.1-100 % nucleotide sequence identity with other LBVaV isolates. Iranian isolates of MiLBVV belonged to subgroup A and showed 88.6-98.8 % nucleotide sequence identity with other isolates of this virus when amplified by PCR primer pair MiLV VP. The occurrence of both viruses in lettuce crop was associated with the presence of resting spores and zoosporangia of the fungus Olpidium brassicae in lettuce roots under field and greenhouse conditions. Two months after sowing lettuce seed in soil collected from a lettuce field with big vein affected plants, all seedlings were positive for LBVaV and MiLBVV, indicating soil transmission of both viruses.

  14. [Contemplation on the application of big data in clinical medicine].

    PubMed

    Lian, Lei

    2015-01-01

    Medicine is another area where big data is being used. The link between clinical treatment and outcome is the key step when applying big data in medicine. In the era of big data, it is critical to collect complete outcome data. Patient follow-up, comprehensive integration of data resources, quality control, and standardized data management are the predominant approaches to avoiding missing data and data islands. Therefore, establishing systematic patient follow-up protocols and prospective data management strategies is an important aspect of applying big data in medicine.

  15. Cincinnati Big Area Additive Manufacturing (BAAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duty, Chad E.; Love, Lonnie J.

    Oak Ridge National Laboratory (ORNL) worked with Cincinnati Incorporated (CI) to demonstrate Big Area Additive Manufacturing which increases the speed of the additive manufacturing (AM) process by over 1000X, increases the size of parts by over 10X and shows a cost reduction of over 100X. ORNL worked with CI to transition the Big Area Additive Manufacturing (BAAM) technology from a proof-of-principle (TRL 2-3) demonstration to a prototype product stage (TRL 7-8).

  16. Solution structure of leptospiral LigA4 Big domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Song; Zhang, Jiahai; Zhang, Xuecheng

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to that of other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca2+ binding property of the LigA4 Big domain. - Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca2+-binding site was identified by Stains-all assay and NMR chemical shift perturbation.

  17. Informatics in neurocritical care: new ideas for Big Data.

    PubMed

    Flechet, Marine; Grandas, Fabian Güiza; Meyfroidt, Geert

    2016-04-01

    Big data is the new hype in business and healthcare. Data storage and processing have become cheap, fast, and easy. Business analysts and scientists are trying to design methods to mine these data for hidden knowledge. Neurocritical care is a field that typically produces large amounts of patient-related data, and these data are increasingly being digitized and stored. This review will try to look beyond the hype, and focus on possible applications in neurointensive care amenable to Big Data research that can potentially improve patient care. The first challenge in Big Data research will be the development of large, multicenter, and high-quality databases. These databases could be used to further investigate recent findings from mathematical models, developed in smaller datasets. Randomized clinical trials and Big Data research are complementary. Big Data research might be used to identify subgroups of patients that could benefit most from a certain intervention, or can be an alternative in areas where randomized clinical trials are not possible. The processing and the analysis of the large amount of patient-related information stored in clinical databases is beyond normal human cognitive ability. Big Data research applications have the potential to discover new medical knowledge, and improve care in the neurointensive care unit.

  18. Big Data Analytics for Genomic Medicine.

    PubMed

    He, Karen Y; Ge, Dongliang; He, Max M

    2017-02-15

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients' genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining various large-scale data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.

  19. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming the norm in geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data to produce new information and knowledge is desired. This paper introduces our latest effort to develop such a platform, based on our past years' experience with cloud and high-performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the second layer is a cloud computing layer based on virtualization that provides on-demand computing services for the upper layers; c) the third layer consists of big data containers customized for dealing with different types of data and functionalities; d) the fourth layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.

  20. The New Improved Big6 Workshop Handbook. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This handbook is intended to help classroom teachers, teacher-librarians, technology teachers, administrators, parents, community members, and students to learn about the Big6 Skills approach to information and technology skills, to use the Big6 process in their own activities, and to implement a Big6 information and technology skills program. The…

  1. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstract modules such as transaction, engine, computation, control, and storage. The formerly separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  2. Big Data: More than Just Big and More than Just Data.

    PubMed

    Spencer, Gregory A

    2017-01-01

    According to one report, 90 percent of the data in the world today were created in the past two years. This statistic is not surprising given the explosion of mobile phones and other devices that generate data, the Internet of Things (e.g., smart refrigerators), and metadata (data about data). While it might be a stretch to figure out how a healthcare organization can use data generated from an ice maker, data from a plethora of rich and useful sources, when combined with an organization's own data, can produce improved results. How can healthcare organizations leverage these rich and diverse data sources to improve patients' health and make their businesses more competitive? The authors of the two feature articles in this issue of Frontiers provide tangible examples of how their organizations are using big data to meaningfully improve healthcare. Sentara Healthcare and Carolinas HealthCare System both use big data in creative ways that differ because of different business situations, yet are also similar in certain respects.

  3. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  4. Big data, smart homes and ambient assisted living.

    PubMed

    Vimarlund, V; Wass, S

    2014-08-15

    To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions and the market faces a number of challenges as the use of big data will increase. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers, it is also of interest for decision makers as customers make more informed choices among today's services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.

  5. Big Data, Smart Homes and Ambient Assisted Living

    PubMed Central

    Wass, S.

    2014-01-01

    Summary Objectives To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions and the market faces a number of challenges as the use of big data will increase. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. Conclusions The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers, it is also of interest for decision makers as customers make more informed choices among today’s services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability. PMID:25123734

  6. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamenshchik, A. Yu.; Manti, S.

    We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: a model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not.

  7. Big trees in the southern forest inventory

    Treesearch

    Christopher M. Oswalt; Sonja N. Oswalt; Thomas J. Brandeis

    2010-01-01

    Big trees fascinate people worldwide, inspiring respect, awe, and oftentimes, even controversy. This paper uses a modified version of American Forests’ Big Trees Measuring Guide point system (May 1990) to rank trees sampled between January of 1998 and September of 2007 on over 89,000 plots by the Forest Service, U.S. Department of Agriculture, Forest Inventory and...

  8. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes a lot of energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
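
    A genetic algorithm for job scheduling of the general kind described here might look like the following toy sketch, which assigns hypothetical jobs to cluster nodes so as to minimize makespan (the finish time of the busiest node). The job times, operators, and parameters are assumptions for illustration, not the authors' model.

```python
import random

random.seed(42)

JOB_TIMES = [5, 3, 8, 2, 7, 4, 6, 1]   # hypothetical job runtimes (minutes)
NUM_NODES = 3                          # cluster nodes to schedule onto

def makespan(assignment):
    """Finish time of the busiest node; the quantity the GA minimizes."""
    loads = [0] * NUM_NODES
    for job, node in enumerate(assignment):
        loads[node] += JOB_TIMES[job]
    return max(loads)

def evolve(pop_size=30, generations=60, mutation_rate=0.1):
    """Tiny genetic algorithm: tournament selection, one-point crossover, mutation."""
    pop = [[random.randrange(NUM_NODES) for _ in JOB_TIMES] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            # Tournament selection: keep the better of two random candidates.
            p1 = min(random.sample(pop, 2), key=makespan)
            p2 = min(random.sample(pop, 2), key=makespan)
            cut = random.randrange(1, len(JOB_TIMES))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if random.random() < mutation_rate:    # random reassignment mutation
                child[random.randrange(len(child))] = random.randrange(NUM_NODES)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=makespan)

best = evolve()
```

A production scheduler would replace `makespan` with a fitness function fed by the kind of performance-estimation module the paper describes.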

  9. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has engendered big issues, which create both opportunities and challenges. The heterogeneity and sheer volume of the data, and the big difference between the physical and virtual worlds, have made it difficult to solve practical problems in urban computing quickly. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  10. Big Data in Psychology: Introduction to Special Issue

    PubMed Central

    Harlow, Lisa L.; Oswald, Frederick L.

    2016-01-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: 1. The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. 2. Availability of large datasets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. Identifying, addressing, and being sensitive to ethical considerations when analyzing large datasets gained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. PMID:27918177
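Theme 4 above, validating a predictive model on a hold-out sample, can be sketched in a few lines with scikit-learn. The data here is synthetic and the model choice (logistic regression) is purely illustrative.

```python
# Minimal hold-out validation sketch: fit on one portion of the data,
# evaluate on a separate, unseen portion. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0)   # 25% held out, never seen in training

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"hold-out accuracy: {model.score(X_hold, y_hold):.3f}")
```

The key point is that `X_hold` plays no role in fitting, so its accuracy estimates how the model generalizes rather than how well it memorized the training set.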

  11. Beyond simple charts: Design of visualizations for big health data

    PubMed Central

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416

  12. Beyond simple charts: Design of visualizations for big health data.

    PubMed

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.

  13. BIG: a large-scale data integration tool for renal physiology.

    PubMed

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.

  14. BigData as a Driver for Capacity Building in Astrophysics

    NASA Astrophysics Data System (ADS)

    Shastri, Prajval

    2015-08-01

    Exciting public interest in astrophysics acquires new significance in the era of Big Data. Since Big Data involves advanced technologies of both software and hardware, astrophysics with Big Data has the potential to inspire young minds with diverse inclinations - i.e., not just those attracted to physics but also those pursuing engineering careers. Digital technologies have become steadily cheaper, which can enable expansion of the Big Data user pool considerably, especially to communities that may not yet be in the astrophysics mainstream, but have high potential because of access to these technologies. For success, however, capacity building at the early stages becomes key. The development of on-line pedagogical resources in astrophysics, astrostatistics, data-mining and data visualisation that are designed around the big facilities of the future can be an important effort that drives such capacity building, especially if facilitated by the IAU.

  15. The dominance of big pharma: power.

    PubMed

    Edgar, Andrew

    2013-05-01

    The purpose of this paper is to provide a normative model for the assessment of the exercise of power by Big Pharma. By drawing on the work of Steven Lukes, it will be argued that while Big Pharma is overtly highly regulated, so that its power is indeed restricted in the interests of patients and the general public, the industry is still able to exercise what Lukes describes as a third dimension of power. This entails concealing the conflicts of interest and grievances that Big Pharma may have with the health care system, physicians and patients, crucially through rhetorical engagements with Patient Advocacy Groups that seek to shape public opinion, and also by marginalising certain groups, excluding them from debates over health care resource allocation. Three issues will be examined: the construction of a conception of the patient as expert patient or consumer; the phenomenon of disease mongering; the suppression or distortion of debates over resource allocation.

  16. Big data for space situation awareness

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Pugh, Mark; Sheaff, Carolyn; Raquepas, Joe; Rocci, Peter

    2017-05-01

    Recent advances in big data (BD) have focused research on the volume, velocity, veracity, and variety of data. These developments enable new opportunities in information management, visualization, machine learning, and information fusion that have potential implications for space situational awareness (SSA). In this paper, we explore some of these BD trends as applicable for SSA towards enhancing the space operating picture. The BD developments could increase measures of performance and measures of effectiveness for future management of the space environment. The global SSA influences include resident space object (RSO) tracking and characterization, cyber protection, remote sensing, and information management. The local satellite awareness can benefit from space weather, health monitoring, and spectrum management for space situation understanding. One area in big data of importance to SSA is value: getting the correct data/information at the right time, which corresponds to SSA visualization for the operator. An SSA big data example is presented supporting disaster relief for space situation awareness, assessment, and understanding.

  17. Big Data Analytics for Genomic Medicine

    PubMed Central

    He, Karen Y.; Ge, Dongliang; He, Max M.

    2017-01-01

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights by examining large-scale, diverse data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure present challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287

  18. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  19. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology for Adverse Events (known as CTCAE), and the laboratory portion of the Logical Observation Identifiers Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  20. [Big data approaches in psychiatry: examples in depression research].

    PubMed

    Bzdok, D; Karrer, T M; Habel, U; Schneider, F

    2017-11-29

    The exploration and treatment of depression are complicated by heterogeneous etiological mechanisms and various comorbidities. With the growing trend towards big data in psychiatry, research and therapy can increasingly target the individual patient. This novel objective requires special methods of analysis. The possibilities and challenges of the application of big data approaches in depression are examined in closer detail. Examples are given to illustrate the possibilities of big data approaches in depression research. Modern machine learning methods are compared to traditional statistical methods in terms of their potential in applications to depression. Big data approaches are particularly suited to the analysis of detailed observational data, the prediction of single data points or several clinical variables, and the identification of endophenotypes. A current challenge lies in the transfer of results into the clinical treatment of patients with depression. Big data approaches enable biological subtypes in depression to be identified and predictions in individual patients to be made. They have enormous potential for prevention, early diagnosis, treatment choice and prognosis of depression as well as for treatment development.

  1. A practical guide to big data research in psychology.

    PubMed

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
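The two tutorial techniques this guide names, LDA topic modeling and SVM classification, can be sketched together with scikit-learn. The tiny corpus and sentiment labels below are invented for illustration; the guide's own tutorials use real data sources.

```python
# Sketch of the two tutorial techniques: unsupervised LDA topic modeling
# and supervised SVM classification. Toy corpus, illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

docs = [
    "happy joyful great day wonderful",
    "sad terrible awful bad day",
    "great wonderful happy amazing",
    "awful bad sad horrible",
]
labels = [1, 0, 1, 0]   # toy sentiment labels: 1 = positive, 0 = negative

X = CountVectorizer().fit_transform(docs)   # bag-of-words counts

# Unsupervised step: infer 2 latent topics; rows are per-document topic mixtures.
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# Supervised step: a linear SVM trained on the word counts.
clf = LinearSVC().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```

In practice the hold-out validation the issue's other articles stress would apply here too: the training-set accuracy printed above says nothing about generalization.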

  2. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    PubMed

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

    The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes in order to take advantage of cutting-edge analytics to control costs and improve patient quality and safety.

  3. From big data to deep insight in developmental science.

    PubMed

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  4. Translating Big Data into Smart Data for Veterinary Epidemiology

    PubMed Central

    VanderWaal, Kimberly; Morrison, Robert B.; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M.

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having “big data” to creating “smart data,” with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues. PMID:28770216

  5. Frontiers of Big Bang cosmology and primordial nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Cheoun, Myung-Ki; Kajino, Toshitaka; Kusakabe, Motohiko; Yamazaki, Dai G.

    2012-11-01

    We summarize some current research on the formation and evolution of the universe and overview some of the key questions surrounding the big bang. There are really only two observational cosmological probes of the physics of the early universe. Of those two, the only probe during the relevant radiation dominated epoch is the yield of light elements during the epoch of big bang nucleosynthesis. The synthesis of light elements occurs in the temperature regime from 10^8 to 10^10 K and times of about 1 to 10^4 sec into the big bang. The other probe is the spectrum of temperature fluctuations in the CMB which (among other things) contains information of the first quantum fluctuations in the universe, along with details of the distribution and evolution of dark matter, baryonic matter and photons up to the surface of photon last scattering. Here, we emphasize the role of these probes in answering some key questions of the big bang and early universe cosmology.
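For a sense of scale, the quoted temperature regime of 10^8 to 10^10 K corresponds to characteristic thermal energies k_B T of roughly 10 keV to 1 MeV, which is why nuclear reactions dominate this epoch. A back-of-envelope conversion (Boltzmann constant from standard physical-constant tables, rounded):

```python
# Quick conversion of the BBN temperature regime into thermal energies k_B * T.
K_B_MEV_PER_K = 8.617e-11   # Boltzmann constant in MeV per kelvin (rounded)

for T in (1e8, 1e9, 1e10):
    print(f"T = {T:.0e} K  ->  k_B T = {K_B_MEV_PER_K * T:.3g} MeV")
```

At the upper end (~1 MeV) thermal photons can still photodisintegrate light nuclei, which is what delays nucleosynthesis until the universe cools.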

  6. The BIG Score and Prediction of Mortality in Pediatric Blunt Trauma.

    PubMed

    Davis, Adrienne L; Wales, Paul W; Malik, Tahira; Stephens, Derek; Razik, Fathima; Schuh, Suzanne

    2015-09-01

    To examine the association between in-hospital mortality and the BIG score (composed of the base deficit [B], International Normalized Ratio [I], and Glasgow Coma Scale [G]) measured on arrival to the emergency department in pediatric blunt trauma patients, adjusted for pre-hospital intubation, volume administration, and presence of hypotension and head injury. We also examined the association between the BIG score and mortality in patients requiring admission to the intensive care unit (ICU). A retrospective 2001-2012 trauma database review of patients ≤ 17 years old with blunt trauma and an Injury Severity Score ≥ 12. Charts were reviewed for in-hospital mortality, components of the BIG score upon arrival to the emergency department, prehospital intubation, crystalloids ≥ 20 mL/kg, presence of hypotension, head injury, and disposition. 50/621 (8%) of the study patients died. Independent mortality predictors were the BIG score (OR 11, 95% CI 6-25), prior fluid bolus (OR 3, 95% CI 1.3-9), and prior intubation (OR 8, 95% CI 2-40). The area under the receiver operating characteristic curve was 0.95 (CI 0.93-0.98), with an optimal BIG cutoff of 16. With BIG <16, death rate was 3/496 (0.006, 95% CI 0.001-0.007) vs 47/125 (0.38, 95% CI 0.15-0.7) with BIG ≥ 16 (P < .0001). In patients requiring admission to the ICU, the BIG score remained predictive of mortality (OR 14.3, 95% CI 7.3-32, P < .0001). The BIG score accurately predicts mortality in a population of North American pediatric patients with blunt trauma independent of pre-hospital interventions, presence of head injury, and hypotension, and identifies children with a high probability of survival (BIG <16). The BIG score is also associated with mortality in pediatric patients with trauma requiring admission to the ICU. Copyright © 2015 Elsevier Inc. All rights reserved.
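The abstract names the score's components and the cutoff of 16 but not the formula. A small sketch using the commonly cited BIG formula, base deficit + 2.5 × INR + (15 − GCS), follows; both the formula and the example patient values are assumptions for illustration, not taken from this paper.

```python
# Sketch of BIG score computation against the abstract's cutoff of 16.
# The formula (base_deficit + 2.5*INR + (15 - GCS)) is the commonly cited
# one, assumed here; the example values are hypothetical.
def big_score(base_deficit: float, inr: float, gcs: int) -> float:
    """Return the BIG score; higher values indicate greater mortality risk."""
    return base_deficit + 2.5 * inr + (15 - gcs)

# Hypothetical patients: (base deficit in mmol/L, INR, GCS on arrival)
for bd, inr, gcs in [(2.0, 1.0, 15), (10.0, 1.8, 7)]:
    score = big_score(bd, inr, gcs)
    flag = "high risk (>= 16)" if score >= 16 else "below cutoff"
    print(f"BIG = {score:.1f} -> {flag}")
```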

  7. Big-City Rules

    ERIC Educational Resources Information Center

    Gordon, Dan

    2011-01-01

    When it comes to implementing innovative classroom technology programs, urban school districts face significant challenges stemming from their big-city status. These range from large bureaucracies, to scalability, to how to meet the needs of a more diverse group of students. Because of their size, urban districts tend to have greater distance…

  8. Big(ger) Data as Better Data in Open Distance Learning

    ERIC Educational Resources Information Center

    Prinsloo, Paul; Archer, Elizabeth; Barnes, Glen; Chetty, Yuraisha; van Zyl, Dion

    2015-01-01

    In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously…

  9. Big Data Analysis Framework for Healthcare and Social Sectors in Korea

    PubMed Central

    Song, Tae-Min

    2015-01-01

    Objectives We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. Methods We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Results Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on an ICT-based policy of current government and the strategic goals of the Ministry of Health and Welfare. We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. Conclusions There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached. PMID:25705552

  10. Big data analysis framework for healthcare and social sectors in Korea.

    PubMed

    Song, Tae-Min; Ryu, Seewon

    2015-01-01

    We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on an ICT-based policy of current government and the strategic goals of the Ministry of Health and Welfare. We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached.

  11. Female "Big Fish" Swimming against the Tide: The "Big-Fish-Little-Pond Effect" and Gender-Ratio in Special Gifted Classes

    ERIC Educational Resources Information Center

    Preckel, Franzis; Zeidner, Moshe; Goetz, Thomas; Schleyer, Esther Jane

    2008-01-01

    This study takes a second look at the "big-fish-little-pond effect" (BFLPE) on a national sample of 769 gifted Israeli students (32% female) previously investigated by Zeidner and Schleyer (Zeidner, M., & Schleyer, E. J., (1999a). "The big-fish-little-pond effect for academic self-concept, test anxiety, and school grades in…

  12. Privacy Challenges of Genomic Big Data.

    PubMed

    Shen, Hong; Ma, Jian

    2017-01-01

    With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently with low cost. However, such massive amount of personal genomic data creates tremendous challenge for privacy, especially given the emergence of direct-to-consumer (DTC) industry that provides genetic testing services. Here we review the recent development in genomic big data and its implications on privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  13. Big two personality and big three mate preferences: similarity attracts, but country-level mate preferences crucially matter.

    PubMed

    Gebauer, Jochen E; Leary, Mark R; Neberich, Wiebke

    2012-12-01

    People differ regarding their "Big Three" mate preferences of attractiveness, status, and interpersonal warmth. We explain these differences by linking them to the "Big Two" personality dimensions of agency/competence and communion/warmth. The similarity-attracts hypothesis predicts that people high in agency prefer attractiveness and status in mates, whereas those high in communion prefer warmth. However, these effects may be moderated by agentics' tendency to contrast from ambient culture, and communals' tendency to assimilate to ambient culture. Attending to such agentic-cultural-contrast and communal-cultural-assimilation crucially qualifies the similarity-attracts hypothesis. Data from 187,957 online-daters across 11 countries supported this model for each of the Big Three. For example, agentics, more so than communals, preferred attractiveness, but this similarity-attracts effect virtually vanished in attractiveness-valuing countries. This research may reconcile inconsistencies in the literature while utilizing nonhypothetical and consequential mate preference reports that, for the first time, were directly linked to mate choice.

  14. Big Biology: Supersizing Science During the Emergence of the 21st Century

    PubMed Central

    Vermeulen, Niki

    2017-01-01

    Is biology the youngest member of the Big Science family? Increased collaboration in biological research became the subject of heated discussion in the wake of the Human Genome Project, but debates and reflections remained largely polemical and showed limited appreciation for the diversity and explanatory power of the concept of Big Science. At the same time, science and technology studies scholars have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of Big Science with data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced, and analytically useful understanding of Big Biology, and to move beyond the normative debates with their simple dichotomies and rhetorical positions. While the concept of Big Science can be seen as a fashion in science policy, by now perhaps even an old-fashioned one, I argue that its analytical use directs our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, namely in its patterns of research organization, the technologies it uses, and the societal contexts in which it operates. Reflections on Big Science, Big Biology, and their relations to knowledge production can thus place recent claims of fundamental change in life science research in historical context. PMID:27215209

  15. Preliminary survey of the mayflies (Ephemeroptera) and caddisflies (Trichoptera) of Big Bend Ranch State Park and Big Bend National Park

    PubMed Central

    Baumgardner, David E.; Bowles, David E.

    2005-01-01

    The mayfly (Insecta: Ephemeroptera) and caddisfly (Insecta: Trichoptera) fauna of Big Bend National Park and Big Bend Ranch State Park are reported based upon numerous records. For mayflies, sixteen species representing four families and twelve genera are reported. By comparison, thirty-five species of caddisflies were collected during this study representing seventeen genera and nine families. Although the Rio Grande supports the greatest diversity of mayflies (n=9) and caddisflies (n=14), numerous spring-fed creeks throughout the park also support a wide variety of species. A general lack of data on the distribution and abundance of invertebrates in Big Bend National and State Park is discussed, along with the importance of continuing this type of research. PMID:17119610

  16. Fixing the Big Bang Theory's Lithium Problem

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-02-01

    How did our universe come into being? The Big Bang theory is a widely accepted and highly successful cosmological model of the universe, but it does introduce one puzzle: the cosmological lithium problem. Have scientists now found a solution? Too Much Lithium: In the Big Bang theory, the universe expanded rapidly from a very high-density and high-temperature state dominated by radiation. This theory has been validated again and again: the discovery of the cosmic microwave background radiation and observations of the large-scale structure of the universe both beautifully support the Big Bang theory, for instance. But one pesky trouble spot remains: the abundance of lithium. [Figure caption: The arrows show the primary reactions involved in Big Bang nucleosynthesis, and their flux ratios, as predicted by the authors' model, are given on the right. Synthesizing primordial elements is complicated! Hou et al. 2017] According to Big Bang nucleosynthesis theory, primordial nucleosynthesis ran wild during the first half hour of the universe's existence. This produced most of the universe's helium and small amounts of other light nuclides, including deuterium and lithium. But while predictions match the observed primordial deuterium and helium abundances, Big Bang nucleosynthesis theory overpredicts the abundance of primordial lithium by about a factor of three. This inconsistency is known as the cosmological lithium problem, and attempts to resolve it using conventional astrophysics and nuclear physics over the past few decades have not been successful. In a recent publication led by Suqing Hou (Institute of Modern Physics, Chinese Academy of Sciences) and advisor Jianjun He (Institute of Modern Physics / National Astronomical Observatories, Chinese Academy of Sciences), however, a team of scientists has proposed an elegant solution to this problem. [Figure caption: Time and temperature evolution of the abundances of primordial light elements during the beginning of the universe. The authors' model (dotted lines

  17. BIG: a large-scale data integration tool for renal physiology

    PubMed Central

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya

    2016-01-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488

  18. AmeriFlux US-Rms RCEW Mountain Big Sagebrush

    DOE Data Explorer

    Flerchinger, Gerald [USDA Agricultural Research Service

    2017-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-Rms RCEW Mountain Big Sagebrush. Site Description - The site is located on the USDA-ARS's Reynolds Creek Experimental Watershed. It is dominated by mountain big sagebrush on land managed by USDI Bureau of Land Management.

  19. Vertebrate richness and biogeography in the Big Thicket of Texas

    Treesearch

    Michael H MacRoberts; Barbara R. MacRoberts; D. Craig Rudolph

    2010-01-01

    The Big Thicket of Texas has been described as rich in species and as a "crossroads": a place where organisms from many different regions meet. We examine the species richness and regional affiliations of Big Thicket vertebrates. We found that the Big Thicket is neither exceptionally rich in vertebrates nor a crossroads for vertebrates. Its vertebrate fauna is...

  20. Creating value in health care through big data: opportunities and policy implications.

    PubMed

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, makes big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current policies to balance the potential societal benefits of big-data approaches against the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised.

  1. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data, lots of data, generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society and our nation's security. Yet the volume, velocity, variety and veracity of data are simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses and more. This scientific capability is foundational to PNNL's energy, environment and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change and scientific exploration.

  2. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream owing to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature have not covered spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously transforms the concentrations in the stream and in the storage zones using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the model. The derived semi-analytical solution is validated against field data from the literature, and good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate applications of the present solution. It is shown that solute transport can be greatly affected by variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.
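
    The multiple-zone transient storage model named above is not written out in the abstract; a minimal sketch, assuming the standard formulation with N storage zones and spatially varying exchange parameters (the article's exact notation may differ), is:

```latex
\begin{align}
\frac{\partial C}{\partial t} &= -\frac{Q}{A}\frac{\partial C}{\partial x}
  + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
  + \sum_{j=1}^{N} \alpha_j(x)\,\bigl(C_{S,j} - C\bigr),\\
\frac{\partial C_{S,j}}{\partial t} &= \alpha_j(x)\,\frac{A}{A_{S,j}(x)}\,\bigl(C - C_{S,j}\bigr),
  \qquad j = 1,\dots,N,
\end{align}
```

    where \(C\) is the in-stream concentration, \(C_{S,j}\) the concentration in storage zone \(j\), \(Q\) the discharge, \(A\) and \(A_{S,j}\) the stream and storage cross-sectional areas, \(D\) the dispersion coefficient, and \(\alpha_j\) the mass exchange coefficients; spatial non-uniformity enters through \(\alpha_j(x)\) and \(A_{S,j}(x)\).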

  3. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
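
    The standard BB-BC loop referenced above alternates a random "big bang" scatter with a "big crunch" collapse to a fitness-weighted center of mass. The following is a minimal continuous-variable sketch, not the article's discrete truss formulation; the function and parameter names are illustrative, and a non-negative objective is assumed:

```python
import random

def big_bang_big_crunch(f, bounds, pop_size=40, iters=60, seed=0):
    """Minimize a non-negative objective f over a box via a basic BB-BC loop.

    bounds: list of (low, high) pairs, one per design variable.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    # Initial "big bang": a uniform random population.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for k in range(1, iters + 1):
        # "Big crunch": collapse to a fitness-weighted center of mass
        # (smaller objective value => larger weight, for minimization).
        weights = [1.0 / (1e-12 + f(x)) for x in pop]
        total = sum(weights)
        center = [sum(w * x[d] for w, x in zip(weights, pop)) / total
                  for d in range(dim)]
        # Next "big bang": scatter candidates around the center with a
        # spread that shrinks as 1/k, clipped to the bounds.
        pop = [[min(max(center[d] + rng.gauss(0.0, 1.0)
                        * (bounds[d][1] - bounds[d][0]) / k,
                        bounds[d][0]), bounds[d][1])
                for d in range(dim)]
               for _ in range(pop_size)]
        best = min(pop + [best], key=f)
    return best

# Toy usage: minimize the 2-D sphere function on [-5, 5]^2.
solution = big_bang_big_crunch(lambda x: sum(v * v for v in x),
                               [(-5.0, 5.0)] * 2)
```

    The shrinking scatter radius is what drives convergence; the refined variant in the article modifies this basic recipe for discrete member sizes.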

  4. How quantum is the big bang?

    PubMed

    Bojowald, Martin

    2008-06-06

    When quantum gravity is used to discuss the big bang singularity, the most important, though rarely addressed, question is what role genuine quantum degrees of freedom play. Here, complete effective equations are derived for isotropic models with an interacting scalar to all orders in the expansions involved. The resulting coupling terms show that quantum fluctuations do not affect the bounce much. Quantum correlations, however, do have an important role and could even eliminate the bounce. How quantum gravity regularizes the big bang depends crucially on properties of the quantum state.

  5. 76 FR 47141 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ....us , with the words Big Horn County RAC in the subject line. Facsimilies may be sent to 307-674-2668... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. [[Page 47142

  6. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015 was conducted. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There is a need to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of the state of the science in data analytics, and to continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty and the preparation of future scientists (PhD programs) are ready for big data and data science.

  7. Priming the Pump for Big Data at Sentara Healthcare.

    PubMed

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

    Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office, and outside the electronic health record (EHR), has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems seeking to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan, separate from but aligned with corporate system goals, to ensure optimal investment and management of this essential asset.

  8. Exascale computing and big data

    DOE PAGES

    Reed, Daniel A.; Dongarra, Jack

    2015-06-25

    Scientific discovery and engineering innovation requires unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  9. Exascale computing and big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, Daniel A.; Dongarra, Jack

    Scientific discovery and engineering innovation requires unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  10. Maps showing estimated sediment yield from coastal landslides and active slope distribution along the Big Sur coast, Monterey and San Luis Obispo Counties, California

    USGS Publications Warehouse

    Hapke, Cheryl J.; Green, Krystal R.; Dallas, Kate

    2004-01-01

    The 1982-83 and 1997-98 El Niños brought very high precipitation to California's central coast; this precipitation resulted in raised groundwater levels, coastal flooding, and destabilized slopes throughout the region. Large landslides in the coastal mountains of Big Sur in Monterey and San Luis Obispo Counties blocked sections of California State Route 1, closing the road for months at a time. Large landslides such as these occur frequently in the winter months along the Big Sur coast due to the steep topography and weak bedrock. A large landslide in 1983 resulted in the closure of Highway 1 for over a year to repair the road and stabilize the slope. Resulting work from the 1983 landslide cost over $7 million and generated 30 million cubic yards of debris from landslide removal and excavations to re-establish the highway along the Big Sur coast. Before establishment of the Monterey Bay National Marine Sanctuary (MBNMS) in 1992, typical road opening measures involved disposal of some landslide material and excess material generated from slope stabilization onto the seaward side of the highway. It is likely that some or most of this disposed material, either directly or indirectly through subsequent erosion, was eventually transported downslope into the ocean. In addition to the landslides that initiate above the road, natural slope failures sometimes occur on the steep slopes below the road and thus deliver material to the base of the coastal mountains where it is eroded and dispersed by waves and nearshore currents. Any coastal-slope landslide, generated through natural or anthropogenic processes, can result in sediment entering the nearshore zone. The waters offshore of the Big Sur coast are part of the MBNMS. Since it was established in 1992, landslide-disposal practices came under question for two reasons. The U.S. 
Code of Federal Regulations, Title 15, Section 922.132 prohibits discharging or depositing, from beyond the boundary of the Sanctuary, any material

  11. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes, and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-gram) and Dynamic algorithms are compared, and both have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework, and potentially within any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to addressing the need for systems that can handle the enormous amounts of diverse data generated by new biomedical methodologies.
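
    The Dice (di-gram) similarity mentioned above is a standard lexical measure over character bigrams. A minimal sketch follows; the example strings are hypothetical, not drawn from the caBIG metadata repository, and the paper's exact variant may differ:

```python
def bigrams(s):
    """Overlapping character bigrams (di-grams) of a lowercased string."""
    s = s.lower()
    return [s[i:i + 2] for i in range(len(s) - 1)]

def dice_coefficient(a, b):
    """Dice similarity: 2 * |shared bigrams| / (|bigrams(a)| + |bigrams(b)|)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0  # degenerate case: both strings shorter than 2 characters
    shared, remaining = 0, list(bb)
    for g in ba:
        if g in remaining:  # count each bigram occurrence at most once
            remaining.remove(g)
            shared += 1
    return 2.0 * shared / (len(ba) + len(bb))

# Hypothetical match of a UML class-attribute to a CDE property name:
score = dice_coefficient("patientBirthDate", "patient_birth_date")
```

    Scores near 1.0 indicate near-identical names, which is why such a simple measure can establish a reasonable baseline for attribute-to-CDE matching.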

  12. Extending Big-Five Theory into Childhood: A Preliminary Investigation into the Relationship between Big-Five Personality Traits and Behavior Problems in Children.

    ERIC Educational Resources Information Center

    Ehrler, David J.; McGhee, Ron L.; Evans, J. Gary

    1999-01-01

    Investigation conducted to link Big-Five personality traits with behavior problems identified in childhood. Results show distinct patterns of behavior problems associated with various personality characteristics. Preliminary data indicate that identifying Big-Five personality trait patterns may be a useful dimension of assessment for understanding…

  13. 77 FR 58542 - Federal Home Loan Bank Members Selected for Community Support Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... Bank & Trust Pawleys Island South Carolina. Highlands Union Bank Abingdon Virginia. Burke & Herbert... Shenandoah Iowa. Pinnacle Bank Sioux City Sioux City Iowa. Community State Bank Spencer Iowa. MetaBank Storm...

  14. Will Big Data Mean the End of Privacy?

    ERIC Educational Resources Information Center

    Pence, Harry E.

    2015-01-01

    Big Data is currently a hot topic in the field of technology, and many campuses are considering the addition of this topic into their undergraduate courses. Big Data tools are not just playing an increasingly important role in many commercial enterprises; they are also combining with new digital devices to dramatically change privacy. This article…

  15. Big Earth Data Initiative: Metadata Improvement: Case Studies

    NASA Technical Reports Server (NTRS)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management, and delivery of the U.S. Government's civil Earth observation data to improve the discovery, access and use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all three goals.

  16. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  17. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  18. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  19. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  20. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  1. The little sibling of the big rip singularity

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Martín-Moruno, Prado; Ouali, Taoufik; Tavakoli, Yaser

    2015-07-01

    In this paper, we present a new cosmological event, which we have named the little sibling of the big rip. This event is much smoother than the big rip singularity. When the little sibling of the big rip is reached, the Hubble rate and the scale factor blow up, but the cosmic derivative of the Hubble rate does not. This abrupt event takes place at infinite cosmic time, where the scalar curvature explodes. We show that a doomsday à la little sibling of the big rip is compatible with an accelerating universe; indeed, at present it would perfectly mimic a ΛCDM scenario. It turns out that, even though the event seems to be harmless because it takes place in the infinite future, the bound structures in the universe would be unavoidably destroyed in a finite cosmic time from now. The model can be motivated by requiring that the weak energy condition not be strongly violated in our universe, and it could give us some hints about the status of recently formulated nonlinear energy conditions.
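
    In symbols, and assuming a spatially flat FLRW background with the standard Ricci scalar expression (a sketch of the defining limits, not the authors' exact formulation), the event can be stated as:

```latex
a(t) \to \infty, \qquad H(t) \to \infty, \qquad
\dot{H}(t)\ \text{remains finite}, \qquad \text{as } t \to \infty,
```

    so the scalar curvature \(R = 6\bigl(\dot{H} + 2H^{2}\bigr)\) diverges through \(H\) alone; in a genuine big rip, by contrast, both \(H\) and \(\dot{H}\) blow up at a finite cosmic time.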

  2. Differential Privacy Preserving in Big Data Analytics for Connected Health.

    PubMed

    Lin, Chi; Song, Zihao; Song, Houbing; Zhou, Yanhong; Wang, Yi; Wu, Guowei

    2016-04-01

    In Body Area Networks (BANs), the big data collected by wearable sensors usually contain sensitive information that must be appropriately protected. Previous methods neglected the privacy protection issue, leading to privacy exposure. In this paper, a differential privacy protection scheme for big data in body sensor networks is developed. Compared with previous methods, this scheme provides privacy protection with higher availability and reliability. We introduce the concept of dynamic noise thresholds, which makes our scheme more suitable for processing big data. Experimental results demonstrate that, even when the attacker has full background knowledge, the proposed scheme can still add enough interference to sensitive big data to preserve privacy.
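
    The textbook route to differential privacy is the Laplace mechanism, sketched below for a counting query; this illustrates the general idea only, and does not model the article's dynamic noise thresholds. The function names and sensor readings are illustrative:

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon, seed=None):
    """Epsilon-differentially-private count via the Laplace mechanism.

    A counting query has sensitivity 1 (one person's reading changes the
    count by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    rng = random.Random(seed)
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon, rng)

# Hypothetical wearable heart-rate readings: noisy count of values > 100 bpm.
readings = [72, 88, 103, 95, 110, 99, 120]
noisy = dp_count(readings, lambda hr: hr > 100, epsilon=0.5, seed=7)
```

    Smaller epsilon means stronger privacy but noisier answers; a dynamic threshold scheme like the article's would, in effect, adapt this noise level to the data stream.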

  3. Breaking BAD: A Data Serving Vision for Big Active Data

    PubMed Central

    Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.

    2017-01-01

    Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377

  4. Using Big (and Critical) Data to Unmask Inequities in Community Colleges

    ERIC Educational Resources Information Center

    Rios-Aguilar, Cecilia

    2014-01-01

    This chapter presents various definitions of big data and examines some of the assumptions regarding the value and power of big data, especially as it relates to issues of equity in community colleges. Finally, this chapter ends with a discussion of the opportunities and challenges of using big data, critically, for institutional researchers.

  5. 50 CFR 86.11 - What does the national BIG Program do?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false What does the national BIG Program do? 86.11 Section 86.11 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE... GRANT (BIG) PROGRAM General Information About the Grant Program § 86.11 What does the national BIG...

  6. Data Confidentiality Challenges in Big Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Jian; Zhao, Dongfang

    In this paper, we address the problem of data confidentiality in big data analytics. In many fields, useful patterns can be extracted by applying machine learning techniques to big data. However, data confidentiality must be protected; in many scenarios, it may well be a prerequisite for data to be shared at all. We present a scheme that provides provably secure data confidentiality and discuss various techniques for optimizing the performance of such a system.

  7. Plants of the Big Cypress National Preserve, Florida

    USGS Publications Warehouse

    Muss, J.D.; Austin, D.F.; Snyder, J.R.

    2003-01-01

    A new survey of the Big Cypress National Preserve shows that the vascular flora consists of 145 families and 851 species. Of these, 72 are listed by the State of Florida as endangered or threatened plants, while many others are on the margins of their ranges. The survey also shows 158 species of exotic plants within the Preserve, some of which imperil native species by competing with them. Finally, we compare the flora of the Big Cypress National Preserve with those of the nearby Fakahatchee Strand State Preserve and the Everglades National Park. Although Big Cypress is less than half the size of Everglades National Park, it has 90% of the native species richness (693 vs. 772).

  8. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.

  9. Clinical research of traditional Chinese medicine in big data era.

    PubMed

    Zhang, Junhua; Zhang, Boli

    2014-09-01

    With the advent of the big data era, our thinking, technology, and methodology are being transformed. Data-intensive scientific discovery based on big data, named "The Fourth Paradigm," has become a new paradigm of scientific research. Along with the development and application of Internet information technology in the field of healthcare, individual health records, clinical data of diagnosis and treatment, and genomic data have accumulated dramatically, generating big data in the medical field for clinical research and assessment. With the support of big data, the defects and weaknesses of conventional sampling-based clinical evaluation methodology may be overcome. Our research target shifts from "causality inference" to "correlativity analysis." This not only facilitates the evaluation of individualized treatment, disease prediction, prevention, and prognosis, but is also suitable for the practice of preventive healthcare and symptom pattern differentiation for treatment in traditional Chinese medicine (TCM), and for the post-marketing evaluation of Chinese patent medicines. To conduct clinical studies involving big data in the TCM domain, top-level design is needed and should be carried out in an orderly manner. Fundamental construction and innovation studies should be strengthened in data platform creation, data analysis technology, and the fostering and training of big data professionals.

  10. SETI as a part of Big History

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and related 1997 Nobel Prize in Economics!) may be successfully applied to the whole of Big History. In particular, in this paper we derive Big History Theory based on GBMs: just as the GBM is the “movie” unfolding in time, so the Statistical Drake Equation is its “still picture”, static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50
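    The stochastic machinery the abstract invokes is standard and easy to sketch. Below is a minimal, illustrative simulation of a single GBM path using the exact log-space discretization; the parameter values are arbitrary placeholders, not figures from Maccone's model:

```python
import math
import random

def simulate_gbm(n0, mu, sigma, t_max, steps, seed=0):
    """Simulate one Geometric Brownian Motion path N(t) with drift mu and
    volatility sigma, starting from N(0) = n0. Uses the exact update on
    log N, so the path is always strictly positive."""
    rng = random.Random(seed)
    dt = t_max / steps
    log_n = math.log(n0)
    path = [n0]
    for _ in range(steps):
        # d(log N) = (mu - sigma^2 / 2) dt + sigma dW, with dW ~ N(0, dt)
        log_n += (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(math.exp(log_n))
    return path

# Illustrative run: 1000 steps over 10 time units.
path = simulate_gbm(n0=1.0, mu=0.05, sigma=0.2, t_max=10.0, steps=1000)
print(len(path), min(path) > 0)
```

    Positivity is the property that makes GBM a plausible model for a species count: the process can grow or shrink exponentially but never goes negative.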

  11. Research on Implementing Big Data: Technology, People, & Processes

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant; Johnson, Margie; Dennis, Randall

    2015-01-01

    When many people hear the term "big data", they primarily think of a technology tool for the collection and reporting of data of high variety, volume, and velocity. However, the complexity of big data lies not only in the technology, but also in the processes, policies, and people supporting it. This paper was written by three experts to…

  12. The Ethics of Big Data and Nursing Science.

    PubMed

    Milton, Constance L

    2017-10-01

    Big data is a scientific, social, and technological trend referring to the process and size of datasets available for analysis. Ethical implications arise as healthcare disciplines, including nursing, struggle over questions of informed consent, privacy, ownership of data, and its possible use in epistemology. The author offers straight-thinking possibilities for the use of big data in nursing science.

  13. A peek into the future of radiology using big data applications

    PubMed Central

    Kharat, Amit T.; Singhal, Shubham

    2017-01-01

    Big data is the extremely large amount of data that is available in the radiology department. Big data is identified by four Vs – Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs – Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be made for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, a solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstruction (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as memory and server load, and provide prompt results. However, a word of caution: “big data” should not become “dump data” due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized

  14. A peek into the future of radiology using big data applications.

    PubMed

    Kharat, Amit T; Singhal, Shubham

    2017-01-01

    Big data is the extremely large amount of data that is available in the radiology department. Big data is identified by four Vs - Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs - Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be made for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, a solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstruction (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as memory and server load, and provide prompt results. However, a word of caution: "big data" should not become "dump data" due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized and

  15. Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics.

    PubMed

    Peek, N; Holmes, J H; Sun, J

    2014-08-15

    To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
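    The distributed-processing pattern the review describes (reduce each partition to a small summary, then combine the summaries) can be mimicked on a single machine. A toy sketch, using a thread pool in place of a real cluster; all names here are illustrative, not from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_stats(chunk):
    """'Map' step: reduce one data partition to a tiny summary (sum, count)."""
    return sum(chunk), len(chunk)

def distributed_mean(data, n_workers=4):
    """Compute a global mean by summarizing chunks in parallel and merging
    the partial (sum, count) pairs -- the same shape of computation that
    cluster frameworks like MapReduce or Spark run across many machines."""
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_stats, chunks))
    total = sum(s for s, _ in partials)   # 'reduce' step: merge summaries
    count = sum(c for _, c in partials)
    return total / count

print(distributed_mean(list(range(1, 101))))  # 50.5
```

    The key design point is that only small summaries, never raw partitions, cross the worker boundary, which is what makes the pattern scale.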

  16. Use of modular amphibious vehicles for conducting research in coastal zone

    NASA Astrophysics Data System (ADS)

    Zeziulin, Denis; Makarov, Vladimir; Belyaev, Alexander; Beresnev, Pavel; Kurkin, Andrey

    2016-04-01

    The project aims to create workable running systems for research complexes moving along the bottom of coastal areas (in shallow waters) to investigate waves, currents, and sediment transport; to investigate ecosystems and assess the biodiversity of organisms; to inspect and monitor environmental conditions and the anthropogenic load on nature; and to perform bathymetric studies. For all the variety of functional capabilities of modern robotic systems, the possibilities of their application in the study of coastal zones are extremely limited. Conducting research using aerial vehicles is limited by flight-safety conditions. Use of floating robotic systems in environmental monitoring and ecosystem research is only possible under the relatively "soft" wave climate of the coastal zone. For these purposes, there are special amphibians such as the remote-controlled vehicle Surf Rover [Daily, William R., Mark A. Johnson, and Daniel A. Oslecki. "Initial Development of an Amphibious ROV for Use in Big Surf." Marine Technology Society 28.1 (1994): 3-10. Print.] and the mobile system MARC-1 ["The SPROV'er." Florida Institute of Technology: Department of Marine and Environmental Systems. Web. 05 May 2010.]. The paper describes methodological approaches to the selection of the design parameters of a new system.

  17. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    PubMed Central

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists—and probably the most crucial one—is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study. PMID:27242639

  18. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    PubMed

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
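    The split/analyze/meta-analyze idea is simple to sketch: partition the big dataset, run a familiar analysis on each split, then pool the per-split estimates as in a meta-analysis. A minimal stdlib sketch follows; the per-split statistic (a mean) and fixed-effect inverse-variance pooling are chosen for illustration only, while the paper's own demonstrations use R and real datasets:

```python
import math
import random

def analyze(split):
    """Analyze one split: estimate the mean and its sampling variance."""
    n = len(split)
    mean = sum(split) / n
    var = sum((x - mean) ** 2 for x in split) / (n - 1)
    return mean, var / n  # (estimate, squared standard error)

def meta_analyze(results):
    """Fixed-effect inverse-variance pooling of the per-split estimates."""
    weights = [1.0 / v for _, v in results]
    pooled = sum(w * est for w, (est, _) in zip(weights, results)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Simulated "big" dataset: 100,000 draws from N(10, 2^2).
rng = random.Random(42)
big_sample = [rng.gauss(10, 2) for _ in range(100_000)]

# Split into tractable pieces, analyze each, then pool.
splits = [big_sample[i:i + 10_000] for i in range(0, len(big_sample), 10_000)]
pooled, se = meta_analyze([analyze(s) for s in splits])
print(pooled, se)
```

    Because each split is analyzed independently, the analyze step can use any existing single-machine procedure and can be run in parallel.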

  19. Big Crater as Viewed by Pathfinder Lander

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The 'Big Crater' is actually a relatively small Martian crater to the southeast of the Mars Pathfinder landing site. It is 1500 meters (4900 feet) in diameter, or about the same size as Meteor Crater in Arizona. Superimposed on the rim of Big Crater (the central part of the rim as seen here) is a smaller crater nicknamed 'Rimshot Crater.' The distance to this smaller crater, and the nearest portion of the rim of Big Crater, is 2200 meters (7200 feet). To the right of Big Crater, south from the spacecraft, almost lost in the atmospheric dust 'haze,' is the large streamlined mountain nicknamed 'Far Knob.' This mountain is over 450 meters (1480 feet) tall, and is over 30 kilometers (19 miles) from the spacecraft. Another, smaller and closer knob, nicknamed 'Southeast Knob' can be seen as a triangular peak to the left of the flanks of the Big Crater rim. This knob is 21 kilometers (13 miles) southeast from the spacecraft.

    The larger features visible in this scene - Big Crater, Far Knob, and Southeast Knob - were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The scene includes rocky ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of South Twin Peak. The largest rock in the nearfield, just left of center in the foreground, nicknamed 'Otter', is about 1.5 meters (4.9 feet) long and 10 meters (33 feet) from the spacecraft.

    This view of Big Crater was produced by combining 6 individual 'Superpan' scenes from the left and right eyes of the IMP camera. Each frame consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be.

    Mars Pathfinder is the second in NASA

  20. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
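    Metamorphic testing checks a program against relations that must hold between runs, rather than against a known correct output. A toy illustration of one such relation (permutation invariance) with a trivially simple stand-in "program"; the real software under test would of course be far more complex:

```python
import random

def count_kmers(reads):
    """Stand-in for a bioinformatics program under test: count how often
    each read occurs. Its output should not depend on read order."""
    counts = {}
    for r in reads:
        counts[r] = counts.get(r, 0) + 1
    return counts

def metamorphic_permutation_test(program, inputs, trials=10):
    """Metamorphic relation: permuting the inputs must leave the output
    unchanged. Run the program on shuffled copies and compare. Returns
    False if the relation is ever violated (a likely bug)."""
    expected = program(inputs)
    for seed in range(trials):
        shuffled = inputs[:]
        random.Random(seed).shuffle(shuffled)
        if program(shuffled) != expected:
            return False
    return True

reads = ["ACGT", "TTAA", "ACGT", "GGCC"]
print(metamorphic_permutation_test(count_kmers, reads))  # True
```

    The appeal for big data software is that no oracle output is needed: even when the correct answer for a huge input is unknowable, violations of the relation still expose defects.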

  1. The National Institutes of Health's Big Data to Knowledge (BD2K) initiative: capitalizing on biomedical big data.

    PubMed

    Margolis, Ronald; Derr, Leslie; Dunn, Michelle; Huerta, Michael; Larkin, Jennie; Sheehan, Jerry; Guyer, Mark; Green, Eric D

    2014-01-01

    Biomedical research has generated and will continue to generate large amounts of data (termed 'big data') in many formats and at all levels. Consequently, there is an increasing need to better understand and mine the data to further knowledge and foster new discovery. The National Institutes of Health (NIH) has initiated a Big Data to Knowledge (BD2K) initiative to maximize the use of biomedical big data. BD2K seeks to better define how to extract value from the data, both for the individual investigator and the overall research community, create the analytic tools needed to enhance utility of the data, provide the next generation of trained personnel, and develop data science concepts and tools that can be made available to all stakeholders. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Teaching Information & Technology Skills: The Big6[TM] in Secondary Schools.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This companion volume to a previous work focusing on the Big6 Approach in elementary schools provides secondary school classroom teachers, teacher-librarians, and technology teachers with the background and tools necessary to implement an integrated Big6 program. The first part of this book explains the Big6 approach and the rationale behind it.…

  3. The Big Fish

    ERIC Educational Resources Information Center

    DeLisle, Rebecca; Hargis, Jace

    2005-01-01

    The killer whale Shamu jumps through hoops and splashes tourists in hopes of getting the big fish, not because of passion, desire, or simply the enjoyment of doing so. What would happen if those fish were obsolete? Would this killer whale be able to find the passion to continue to entertain people? Or would Shamu find other exciting activities to do…

  4. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

    The superbulky deca-aryleuropocene [Eu(Cp(BIG))2], Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl, was prepared by reaction of [Eu(dmat)2(thf)2], DMAT = 2-Me2N-α-Me3Si-benzyl, with two equivalents of Cp(BIG)H. Recrystallization from cold hexane gave the product with a surprisingly bright and efficient orange emission (45% quantum yield). The crystal structure is isomorphic to those of [M(Cp(BIG))2] (M = Sm, Yb, Ca, Ba) and shows the typical distortions that arise from Cp(BIG)⋅⋅⋅Cp(BIG) attraction as well as an excessively large displacement parameter for the heavy Eu atom (U(eq) = 0.075). In order to gain information on the true oxidation state of the central metal in superbulky metallocenes [M(Cp(BIG))2] (M = Sm, Eu, Yb), several physical analyses have been applied. Temperature-dependent magnetic susceptibility data of [Yb(Cp(BIG))2] show diamagnetism, indicating stable divalent ytterbium. Temperature-dependent (151)Eu Mössbauer effect spectroscopy of [Eu(Cp(BIG))2] was performed over the temperature range 93-215 K, and the hyperfine and dynamical properties of the Eu(II) species are discussed in detail. The mean square amplitude of vibration of the Eu atom as a function of temperature was determined and compared to the value extracted from the single-crystal X-ray data at 203 K. The large difference in these two values was ascribed to the presence of static disorder and/or the presence of low-frequency torsional and librational modes in [Eu(Cp(BIG))2]. X-ray absorption near-edge spectroscopy (XANES) showed that all three [Ln(Cp(BIG))2] (Ln = Sm, Eu, Yb) compounds are divalent. The XANES white-line spectra are 8.3, 7.3, and 7.8 eV lower than the Ln2O3 standards for Sm, Eu, and Yb, respectively. No XANES temperature dependence was found from room temperature to 100 K. XANES also showed that the [Ln(Cp(BIG))2] complexes had less trivalent impurity than a [EuI2(thf)x] standard. The complex [Eu(Cp(BIG))2] shows already at room temperature

  5. Business and Science - Big Data, Big Picture

    NASA Astrophysics Data System (ADS)

    Rosati, A.

    2013-12-01

    Data Science is more than the creation, manipulation, and transformation of data. It is more than Big Data. The business world seems to have a hold on the term 'data science' and, for now, they define what it means. But business is very different than science. In this talk, I address how large datasets, Big Data, and data science are conceptually different in business and science worlds. I focus on the types of questions each realm asks, the data needed, and the consequences of findings. Gone are the days of datasets being created or collected to serve only one purpose or project. The trick with data reuse is to become familiar enough with a dataset to be able to combine it with other data and extract accurate results. As a Data Curator for the Advanced Cooperative Arctic Data and Information Service (ACADIS), my specialty is communication. Our team enables Arctic sciences by ensuring datasets are well documented and can be understood by reusers. Previously, I served as a data community liaison for the North American Regional Climate Change Assessment Program (NARCCAP). Again, my specialty was communicating complex instructions and ideas to a broad audience of data users. Before entering the science world, I was an entrepreneur. I have a bachelor's degree in economics and a master's degree in environmental social science. I am currently pursuing a Ph.D. in Geography. Because my background has embraced both the business and science worlds, I would like to share my perspectives on data, data reuse, data documentation, and the presentation or communication of findings. My experiences show that each can inform and support the other.

  6. Unsupervised Tensor Mining for Big Data Practitioners.

    PubMed

    Papalexakis, Evangelos E; Faloutsos, Christos

    2016-09-01

    Multiaspect data are ubiquitous in modern Big Data applications. For instance, different aspects of a social network are the different types of communication between people, the time stamp of each interaction, and the location associated to each individual. How can we jointly model all those aspects and leverage the additional information that they introduce to our analysis? Tensors, which are multidimensional extensions of matrices, are a principled and mathematically sound way of modeling such multiaspect data. In this article, our goal is to popularize tensors and tensor decompositions to Big Data practitioners by demonstrating their effectiveness, outlining challenges that pertain to their application in Big Data scenarios, and presenting our recent work that tackles those challenges. We view this work as a step toward a fully automated, unsupervised tensor mining tool that can be easily and broadly adopted by practitioners in academia and industry.
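    The building block of the tensor decompositions the article popularizes (CP/CANDECOMP-PARAFAC) is the rank-1 tensor: an outer product of one vector per aspect. A stdlib-only sketch of that building block; the vectors and the fiber-recovery step are purely illustrative:

```python
def rank1_tensor(a, b, c):
    """Build a 3-way rank-1 tensor T[i][j][k] = a[i] * b[j] * c[k] --
    the basic building block of a CP decomposition, where a full tensor
    is modeled as a sum of such terms."""
    return [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

a, b, c = [1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0]
T = rank1_tensor(a, b, c)

# A rank-1 tensor stores len(a)*len(b)*len(c) entries but is described by
# only len(a)+len(b)+len(c) parameters -- the compression CP exploits.
# Any mode-2 "fiber" of T is a scalar multiple of b, so b can be
# recovered (up to scale) from a single fiber:
fiber = [T[0][j][0] for j in range(len(b))]  # equals a[0]*c[0] * b
scale = fiber[0] / b[0]
recovered_b = [x / scale for x in fiber]
print(recovered_b)  # [3.0, 4.0, 5.0]
```

    Real tensor mining fits a sum of such rank-1 terms to noisy multiaspect data (e.g. via alternating least squares), with each factor vector interpretable as a pattern over one aspect.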

  7. [Big data analysis and evidence-based medicine: controversy or cooperation].

    PubMed

    Chen, Xinzu; Hu, Jiankun

    2016-01-01

    The development of evidence-based medicine marked an important milestone in the transition from empirical medicine to evidence-driven modern medicine. With the explosion of biomedical data, rising big data analysis can efficiently address exploratory questions and decision-making issues in biomedicine and healthcare activities. The current problem in China is that big data analysis is still not well conducted or applied to problems such as clinical decision-making and public health policy; the issue should not be a debate over whether big data analysis can replace evidence-based medicine. Therefore, we should clearly understand that, for both evidence-based medicine and big data analysis, the most critical infrastructure must be the substantial work in the design, construction, and collection of original databases in China.

  8. Circulating big endothelin-1: an active role in pulmonary thromboendarterectomy?

    PubMed

    Langer, Frank; Bauer, Michael; Tscholl, Dietmar; Schramm, Rene; Kunihara, Takashi; Lausberg, Henning; Georg, Thomas; Wilkens, Heinrike; Schäfers, Hans-Joachim

    2005-11-01

    Pulmonary thromboendarterectomy is an effective treatment for patients with chronic thromboembolic pulmonary hypertension. The early postoperative course may be associated with pulmonary vasoconstriction and profound systemic vasodilation. We investigated the potential involvement of endothelins in these hemodynamic alterations. Seventeen patients with chronic thromboembolic pulmonary hypertension (pulmonary vascular resistance, 1015 +/- 402 dyne x s x cm(-5) [mean +/- SD]) underwent pulmonary thromboendarterectomy with cardiopulmonary bypass and deep hypothermic circulatory arrest. Peripheral arterial blood samples were drawn before sternotomy, during cardiopulmonary bypass before and after deep hypothermic circulatory arrest, and 0, 8, 16, and 24 hours after surgery and were analyzed for big endothelin-1. The patients were divided into 2 groups according to whether their preoperative big endothelin-1 plasma level was above or below the cutoff point of 2.1 pg/mL, as determined by receiver operating characteristic curve analysis (group A, big endothelin-1 <2.1 pg/mL, n = 8; group B, big endothelin-1 > or =2.1 pg/mL, n = 9). Patients in group B, with higher preoperative big endothelin-1 levels (3.2 +/- 1.0 pg/mL vs 1.5 +/- 0.4 pg/mL; P < .001), were poorer operative candidates (preoperative mean pulmonary artery pressure, 51.3 +/- 7.1 mm Hg vs 43.6 +/- 6.2 mm Hg; P = .006) and had a poorer outcome (mean pulmonary artery pressure 24 hours after surgery, 32.6 +/- 9.5 mm Hg vs 21.8 +/- 6.2 mm Hg; P < .001). Positive correlations were found between preoperative big endothelin-1 levels and preoperative mean pulmonary artery pressure (r = 0.56; P = .02) as well as postoperative mean pulmonary artery pressure at 0 hours (r = 0.70; P = .002) and 24 hours (r = 0.63; P = .006) after surgery. Preoperative big endothelin-1 levels predicted outcome (postoperative mean pulmonary artery pressure at 24 hours after surgery) after pulmonary thromboendarterectomy (area under the

  9. Big data and ophthalmic research.

    PubMed

    Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B

    2016-01-01

    Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research, including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Big I (I-40/I-25) reconstruction & ITS infrastructure.

    DOT National Transportation Integrated Search

    2010-04-20

    The New Mexico Department of Transportation (NMDOT) rebuilt the Big I interchange in Albuquerque to make it safer and more efficient and to provide better access. The Big I is where the Coronado Interstate (I-40) and the Pan American Freeway (I-25) i...

  11. Reconnaissance-level assessment of water and bottom-sediment quality, including pesticides and mercury, in Yankton Sioux Tribe wetlands, Charles Mix County, South Dakota, June-July 2005

    USGS Publications Warehouse

    Schaap, Bryan D.; Bartholomay, Roy C.

    2006-01-01

    During June and July 2005, water and bottom-sediment samples were collected from selected Yankton Sioux Tribe wetlands within the historic Reservation area of eastern Charles Mix County as part of a reconnaissance-level assessment by the U.S. Geological Survey and Yankton Sioux Tribe. The water samples were analyzed for pesticides and mercury species. In addition, the water samples were analyzed for physical properties and chemical constituents that might help further characterize the water quality of the wetlands. The bottom-sediment samples were analyzed for mercury species. During June 2005, water samples were collected from 19 wetlands and were analyzed for 61 widely used pesticide compounds. Many pesticides were not detected in any of the water samples and many others were detected only at low concentrations in a few of the samples. Thirteen pesticides were detected in water samples from at least one of the wetlands. Atrazine and de-ethyl atrazine were detected at each of the 19 wetlands. The minimum, maximum, and median dissolved atrazine concentrations were 0.056, 0.567, and 0.151 microgram per liter (µg/L), respectively. Four pesticides (alachlor, carbaryl, chlorpyrifos, and dicamba) were detected in only one wetland each. The number of pesticides detected in any of the 19 wetlands ranged from 3 to 8, with a median of 6. In addition to the results for this study, recent previous studies have frequently found atrazine in Lake Andes and the Missouri River, but none of the atrazine concentrations have been greater than 3 µg/L, the U.S. Environmental Protection Agency's Maximum Contaminant Level for atrazine in drinking water. During June and July 2005, water and bottom-sediment samples were collected from 10 wetlands. Water samples from each of the wetlands were analyzed for major ions, organic carbon, and mercury species, and bottom-sediment samples were analyzed for mercury species.
For the whole-water samples, the total mercury concentrations ranged from 1
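The minimum, maximum, and median values reported above are simple order statistics over the per-wetland detections. A minimal sketch, using hypothetical concentrations chosen only to match the reported summary values (not the study's measurements):

```python
import statistics

# Hypothetical dissolved atrazine concentrations in µg/L, one per wetland.
# Illustrative values only; the study's per-wetland data are not reproduced here.
concentrations = [0.056, 0.081, 0.120, 0.151, 0.203, 0.340, 0.567]

summary = {
    "min": min(concentrations),
    "max": max(concentrations),
    "median": statistics.median(concentrations),
}
print(summary)  # {'min': 0.056, 'max': 0.567, 'median': 0.151}
```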

  12. The Challenge of Handling Big Data Sets in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Autermann, Christian; Stasch, Christoph; Jirka, Simon

    2016-04-01

    More and more Sensor Web components are deployed in different domains such as hydrology, oceanography or air quality in order to make observation data accessible via the Web. However, besides variability of data formats and protocols in environmental applications, the fast growing volume of data with high temporal and spatial resolution is imposing new challenges for Sensor Web technologies when sharing observation data and metadata about sensors. Variability, volume and velocity are the core issues that are addressed by Big Data concepts and technologies. Most solutions in the geospatial sector focus on remote sensing and raster data, whereas big in-situ observation data sets relying on vector features require novel approaches. Hence, in order to deal with big data sets in infrastructures for observational data, the following questions need to be answered: 1. How can big heterogeneous spatio-temporal datasets be organized, managed, and provided to Sensor Web applications? 2. How can views on big data sets and derived information products be made accessible in the Sensor Web? 3. How can big observation data sets be processed efficiently? We illustrate these challenges with examples from the marine domain and outline how we address these challenges. We therefore show how big data approaches from mainstream IT can be re-used and applied to Sensor Web application scenarios.

  13. What's the Big Idea? Seeking to Top Apollo

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    2012-01-01

    Human space flight has struggled to find its soul since Apollo. The astounding achievements of human space programs over the 40 years since Apollo have failed to be as iconic or central to society as in the 1960s. The paper proffers a way human space flight could again be associated with a societal Big Idea. It describes eight societal factors that have irrevocably changed since Apollo; then analyzes eight other factors that a forward HSF Big Idea would have to fit. The paper closes by assessing the four principal options for HSF futures against those eight factors. Robotic and human industrialization of geosynchronous orbit to provide unlimited, sustainable electrical power to Earth is found to be the best candidate for the next Big Idea.

  14. The effect of big endothelin-1 in the proximal tubule of the rat kidney

    PubMed Central

    Beara-Lasić, Lada; Knotek, Mladen; Čejvan, Kenan; Jakšić, Ozren; Lasić, Zoran; Skorić, Boško; Brkljačić, Vera; Banfić, Hrvoje

    1997-01-01

An obligatory step in the biosynthesis of endothelin-1 (ET-1) is the conversion of its inactive precursor, big ET-1, into the mature form by the action of specific, phosphoramidon-sensitive, endothelin converting enzyme(s) (ECE). Disparate effects of big ET-1 and ET-1 on renal tubule function suggest that big ET-1 might directly influence renal tubule function. Therefore, the role of the enzymatic conversion of big ET-1 into ET-1 in eliciting the functional response (generation of 1,2-diacylglycerol) to big ET-1 was studied in the rat proximal tubules. In renal cortical slices incubated with big ET-1, pretreatment with phosphoramidon (an ECE inhibitor) reduced tissue immunoreactive ET-1 to a level similar to that of cortical tissue not exposed to big ET-1. This confirms the presence and effectiveness of ECE inhibition by phosphoramidon. In freshly isolated proximal tubule cells, big ET-1 stimulated the generation of 1,2-diacylglycerol (DAG) in a time- and dose-dependent manner. Neither phosphoramidon nor chymostatin, a chymase inhibitor, influenced the generation of DAG evoked by big ET-1. Big ET-1-dependent synthesis of DAG was found in the brush-border membrane. It was unaffected by BQ123, an ETA receptor antagonist, but was blocked by bosentan, an ETA,B-nonselective endothelin receptor antagonist. These results suggest that the proximal tubule is a site for the direct effect of big ET-1 in the rat kidney. The effect of big ET-1 is confined to the brush-border membrane of the proximal tubule, which may be the site of big ET-1-sensitive receptors. PMID:9051300

  15. Introducing the Big Knowledge to Use (BK2U) challenge.

    PubMed

    Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan

    2017-01-01

    The purpose of the Big Data to Knowledge initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use. Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK (rule BK) and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule BK for drug-drug interaction discovery. © 2016 New York Academy of Sciences.

  16. Introducing the Big Knowledge to Use (BK2U) challenge

    PubMed Central

    Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan

    2016-01-01

    The purpose of the Big Data to Knowledge (BD2K) initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use (BK2U). Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule-based BK for drug–drug interaction discovery. PMID:27750400

  17. Big data and visual analytics in anaesthesia and health care.

    PubMed

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Gender Differences in Personality across the Ten Aspects of the Big Five.

    PubMed

    Weisberg, Yanna J; Deyoung, Colin G; Hirsh, Jacob B

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level.

  19. Gender Differences in Personality across the Ten Aspects of the Big Five

    PubMed Central

    Weisberg, Yanna J.; DeYoung, Colin G.; Hirsh, Jacob B.

    2011-01-01

    This paper investigates gender differences in personality traits, both at the level of the Big Five and at the sublevel of two aspects within each Big Five domain. Replicating previous findings, women reported higher Big Five Extraversion, Agreeableness, and Neuroticism scores than men. However, more extensive gender differences were found at the level of the aspects, with significant gender differences appearing in both aspects of every Big Five trait. For Extraversion, Openness, and Conscientiousness, the gender differences were found to diverge at the aspect level, rendering them either small or undetectable at the Big Five level. These findings clarify the nature of gender differences in personality and highlight the utility of measuring personality at the aspect level. PMID:21866227

  20. Survey of Cyber Crime in Big Data

    NASA Astrophysics Data System (ADS)

    Rajeswari, C.; Soni, Krishna; Tandon, Rajat

    2017-11-01

Big data involves performing computation and database operations over very large volumes of data drawn automatically from the data possessor's business. Because a critical strategic appeal of big data is access to information from numerous and varied domains, security and privacy will play an essential part in big data research and innovation. The limits of standard IT security practices are well known: malicious software incorporated into applications and operating systems by their own developers is a serious and growing threat that is difficult to counter, and its impact spreads faster in big data settings. A central question, therefore, is whether current security and privacy technology is adequate to enforce controlled access for very large numbers of users across domains, each of whom may need to reach data in their own domain or in others. For many years, trustworthy-systems development has produced a rich set of proven security concepts designed to cope with determined adversaries, but this work has largely been dismissed as "needless excess" by vendors. This paper discusses how big data can take advantage of this mature security and privacy technology and surveys the research challenges that remain.

  1. Geophysical Tools for an Improved Hydrogeologic Conceptual Model of the Big Chino Sub-basin, Central Arizona

    NASA Astrophysics Data System (ADS)

    Macy, J. P.; Kennedy, J.

    2017-12-01

    Water users and managers who rely on the Verde River system and its aquifers for water supplies have an intrinsic interest in developing the best possible tools for assessing the effects of groundwater withdrawals. Past, present, and future groundwater withdrawals from the Big Chino sub-basin will affect groundwater levels in the Big Chino area and groundwater discharge at the headwaters of the Verde River, specifically at the Upper Verde Springs, which is believed to be a major discharge zone of groundwater from the sub-basin. The amount and timing of reduced discharge as base flow is a function of connections between hydrogeologic (aquifer) units, aquifer storage properties and transmissivity, and proximity of withdrawal locations to discharge areas. To better define the aquifer units and aquifer storage properties, the United States Geological Survey, Cities of Prescott and Prescott Valley, and Salt River Project have initiated an ongoing geophysical study using controlled-source audio-frequency magnetotellurics (CSAMT) and repeat microgravity methods. CSAMT, a high-energy electromagnetic method sensitive to lithologic variations between rock and sediment types, is useful for defining aquifers at depths of up to 600 meters. Visual display of CSAMT profiles using Google Earth is useful for understanding and visualizing the relation between geophysics and Big Chino Sub-basin hydrogeology. Initial results from repeat microgravity surveys, which measure changes in subsurface mass (and therefore aquifer storage) over time, reveal spatial variation in the relation between aquifer storage changes and groundwater level changes. This variation reflects different confining conditions and multiple aquifer systems in different parts of the aquifer. Information about confining conditions and multiple aquifers could improve numerical groundwater models and predictions of future groundwater-level and base-flow depletion.

  2. Moving Another Big Desk.

    ERIC Educational Resources Information Center

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  3. HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.

    PubMed

    Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael

    2017-01-01

    Precision medicine is a health management approach that accounts for individual differences in genetic backgrounds and environmental exposures. With the recent advancements in high-throughput omics profiling technologies, collections of large study cohorts, and the developments of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, petabytes of biomedical data generated by multiple measurement modalities poses a significant challenge for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, as well as discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine.Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.

  4. The BigBOSS spectrograph

    NASA Astrophysics Data System (ADS)

    Jelinsky, Patrick; Bebek, Chris; Besuner, Robert; Carton, Pierre-Henri; Edelstein, Jerry; Lampton, Michael; Levi, Michael E.; Poppett, Claire; Prieto, Eric; Schlegel, David; Sholl, Michael

    2012-09-01

BigBOSS is a proposed ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a 14,000 square degree galaxy and quasi-stellar object redshift survey. It consists of a 5,000-fiber-positioner focal plane feeding the spectrographs. The optical fibers are separated into ten 500-fiber slit heads at the entrance of ten identical spectrographs in a thermally insulated room. Each of the ten spectrographs has a spectral resolution (λ/Δλ) between 1500 and 4000 over a wavelength range from 360–980 nm. Each spectrograph uses two dichroic beam splitters to separate the spectrograph into three arms. It uses volume phase holographic (VPH) gratings for high efficiency and compactness. Each arm uses a 4096x4096 15 μm pixel charge coupled device (CCD) for the detector. We describe the requirements and current design of the BigBOSS spectrograph. Design trades (e.g. refractive versus reflective) and manufacturability are also discussed.
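The quoted spectral resolution R = λ/Δλ fixes the width of a resolution element at each wavelength. A minimal sketch (the pairing of particular R values with the band ends is illustrative, not taken from the paper):

```python
def delta_lambda_nm(wavelength_nm: float, resolution: float) -> float:
    """Width of one resolution element, Δλ = λ/R, for resolution R = λ/Δλ."""
    return wavelength_nm / resolution

# Illustrative values at the blue and red ends of the 360-980 nm range,
# assuming R = 1500 and R = 4000 respectively (assumed pairing):
print(round(delta_lambda_nm(360, 1500), 3))  # 0.24
print(round(delta_lambda_nm(980, 4000), 3))  # 0.245
```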

  5. COBE looks back to the Big Bang

    NASA Technical Reports Server (NTRS)

    Mather, John C.

    1993-01-01

    An overview is presented of NASA-Goddard's Cosmic Background Explorer (COBE), the first NASA satellite designed to observe the primeval explosion of the universe. The spacecraft carries three extremely sensitive IR and microwave instruments designed to measure the faint residual radiation from the Big Bang and to search for the formation of the first galaxies. COBE's far IR absolute spectrophotometer has shown that the Big Bang radiation has a blackbody spectrum, proving that there was no large energy release after the explosion.

  6. A survey on platforms for big data analytics.

    PubMed

    Singh, Dilpreet; Reddy, Chandan K

The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
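The paper's platform-specific pseudocode is not reproduced here, but the serial algorithm those implementations parallelize can be sketched as plain Lloyd's k-means (an illustrative single-machine version, not the paper's code):

```python
import random

def kmeans(points, k, iterations=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
print(sorted(kmeans(data, 2)))  # two centroids, one per well-separated cluster
```

The distributed variants surveyed in the paper differ mainly in how the assignment and mean-update steps are partitioned across workers.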

  7. Viscoelastic coupling model of the San Andreas fault along the big bend, southern California

    USGS Publications Warehouse

    Savage, J.C.; Lisowski, M.

    1997-01-01

The big bend segment of the San Andreas fault is the 300-km-long segment in southern California that strikes about N65°W, roughly 25° counterclockwise from the local tangent to the small circle about the Pacific-North America pole of rotation. The broad distribution of deformation of trilateration networks along this segment implies a locking depth of at least 25 km as interpreted by the conventional model of strain accumulation (continuous slip on the fault below the locking depth at the rate of relative plate motion), whereas the observed seismicity and laboratory data on fault strength suggest that the locking depth should be no greater than 10 to 15 km. The discrepancy is explained by the viscoelastic coupling model which accounts for the viscoelastic response of the lower crust. Thus the broad distribution of deformation observed across the big bend segment can be largely associated with the San Andreas fault itself, not subsidiary faults distributed throughout the region. The Working Group on California Earthquake Probabilities [1995] in using geodetic data to estimate the seismic risk in southern California has assumed that strain accumulated off the San Andreas fault is released by earthquakes located off the San Andreas fault. Thus they count the San Andreas contribution to total seismic moment accumulation more than once, leading to an overestimate of the seismicity for magnitude 6 and greater earthquakes in their Type C zones.
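The "conventional model of strain accumulation" invoked above is commonly written as the elastic half-space arctangent profile (the Savage-Burford screw-dislocation model); shown here for reference, not quoted from the paper:

```latex
% Fault-parallel surface velocity at perpendicular distance x from the
% fault trace, for deep slip rate s below locking depth D:
v(x) = \frac{s}{\pi} \arctan\!\left(\frac{x}{D}\right)
```

A broad velocity profile across the fault forces a large apparent D under this model, which is the discrepancy with the shallower seismologically inferred locking depth that the viscoelastic coupling model resolves.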

  8. Big data: the management revolution.

    PubMed

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  9. Hubble Spies Big Bang Frontiers

    NASA Image and Video Library

    2017-12-08

Observations by the NASA/ESA Hubble Space Telescope have taken advantage of gravitational lensing to reveal the largest sample of the faintest and earliest known galaxies in the universe. Some of these galaxies formed just 600 million years after the big bang and are fainter than any other galaxy yet uncovered by Hubble. The team has determined for the first time with some confidence that these small galaxies were vital to creating the universe that we see today. An international team of astronomers, led by Hakim Atek of the Ecole Polytechnique Fédérale de Lausanne, Switzerland, has discovered over 250 tiny galaxies that existed only 600-900 million years after the big bang, one of the largest samples of dwarf galaxies yet discovered at these epochs. The light from these galaxies took over 12 billion years to reach the telescope, allowing the astronomers to look back in time when the universe was still very young. Read more: www.nasa.gov/feature/goddard/hubble-spies-big-bang-frontiers Credit: NASA/ESA

  10. Root Apex Transition Zone As Oscillatory Zone

    PubMed Central

    Baluška, František; Mancuso, Stefano

    2013-01-01

Root apex of higher plants shows very high sensitivity to environmental stimuli. The root cap acts as the most prominent plant sensory organ, sensing diverse physical parameters such as gravity, light, humidity, oxygen, and critical inorganic nutrients. However, the motoric responses to these stimuli are accomplished in the elongation region. This spatial discrepancy was resolved when we discovered and characterized the transition zone, which is interpolated between the apical meristem and the subapical elongation zone. Cells of this zone are very active in cytoskeletal rearrangements, endocytosis and endocytic vesicle recycling, as well as in electric activities. Here we discuss the oscillatory nature of the transition zone which, together with several other features of this zone, suggests that it acts as some kind of command center. In accordance with the early proposal of Charles and Francis Darwin, cells of this root zone receive sensory information from the root cap and instruct the motoric responses of cells in the elongation zone. PMID:24106493

  11. "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes

    MedlinePlus

    ... Home Current Issue Past Issues Special Section "Small Steps, Big Rewards": You Can Prevent Type 2 Diabetes ... onset. Those are the basic facts of "Small Steps. Big Rewards: Prevent type 2 Diabetes," created by ...

  12. 'Big data' in mental health research: current status and emerging possibilities.

    PubMed

    Stewart, Robert; Davis, Katrina

    2016-08-01

    'Big data' are accumulating in a multitude of domains and offer novel opportunities for research. The role of these resources in mental health investigations remains relatively unexplored, although a number of datasets are in use and supporting a range of projects. We sought to review big data resources and their use in mental health research to characterise applications to date and consider directions for innovation in future. A narrative review. Clear disparities were evident in geographic regions covered and in the disorders and interventions receiving most attention. We discuss the strengths and weaknesses of the use of different types of data and the challenges of big data in general. Current research output from big data is still predominantly determined by the information and resources available and there is a need to reverse the situation so that big data platforms are more driven by the needs of clinical services and service users.

  13. Big Data: You Are Adding to . . . and Using It

    ERIC Educational Resources Information Center

    Makela, Carole J.

    2016-01-01

    "Big data" prompts a whole lexicon of terms--data flow; analytics; data mining; data science; smart you name it (cars, houses, cities, wearables, etc.); algorithms; learning analytics; predictive analytics; data aggregation; data dashboards; digital tracks; and big data brokers. New terms are being coined frequently. Are we paying…

  14. The use of big data in transfusion medicine.

    PubMed

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  15. Statistical methods and computing for big data.

    PubMed

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
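The "online updating for stream data" class of methods described above can be illustrated with the simplest streaming statistic. A minimal sketch using Welford's algorithm (illustrative only; the paper's contribution extends online updating to regression estimation and variable selection):

```python
class RunningStats:
    """Welford's online algorithm: update mean and variance one observation
    at a time, without storing or revisiting the stream."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self) -> float:
        """Sample variance of the observations seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance())  # mean 5.0, sample variance 32/7 ≈ 4.571
```

Each update touches only the current observation, so memory use is constant regardless of stream length, which is the defining property of the online-updating approach.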

  16. Big data, smart cities and city planning

    PubMed Central

    2013-01-01

    I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data are largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London’s public transport systems. PMID:29472982

  17. Statistical methods and computing for big data

    PubMed Central

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  18. Big data in food safety: An overview.

    PubMed

    Marvin, Hans J P; Janssen, Esmée M; Bouzembrak, Yamine; Hendriksen, Peter J M; Staats, Martijn

    2017-07-24

    Technology is now being developed that is able to handle vast amounts of structured and unstructured data from diverse sources and origins. These technologies are often referred to as big data, and they open new areas of research and applications that will have an increasing impact in all sectors of our society. In this paper we assessed to what extent big data is being applied in the food safety domain and identified several promising trends. In several parts of the world, governments encourage the publication on the internet of all data generated in publicly funded research projects. This policy opens new opportunities for stakeholders dealing with food safety to address issues that were not tractable before. The application of mobile phones as detection devices for food safety and the use of social media as an early warning of food safety problems are a few examples of the new developments made possible by big data.

  19. bwtool: a tool for bigWig files

    PubMed Central

    Pohl, Andy; Beato, Miguel

    2014-01-01

    BigWig files are a compressed, indexed, binary format for genome-wide signal data for calculations (e.g. GC percent) or experiments (e.g. ChIP-seq/RNA-seq read depth). bwtool is a tool designed to read bigWig files rapidly and efficiently, providing functionality for extracting data and summarizing it in several ways, globally or at specific regions. Additionally, the tool enables the conversion of the positions of signal data from one genome assembly to another, also known as ‘lifting’. We believe bwtool can be useful for the analyst frequently working with bigWig data, which is becoming a standard format to represent functional signals along genomes. The article includes supplementary examples of running the software. Availability and implementation: The C source code is freely available under the GNU public license v3 at http://cromatina.crg.eu/bwtool. Contact: andrew.pohl@crg.eu, andypohl@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24489365

  20. Big data, smart cities and city planning.

    PubMed

    Batty, Michael

    2013-11-01

    I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data is largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer-term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London's public transport systems.

  1. Five critical questions of scale for the coastal zone

    NASA Astrophysics Data System (ADS)

    Swaney, D. P.; Humborg, C.; Emeis, K.; Kannen, A.; Silvert, W.; Tett, P.; Pastres, R.; Solidoro, C.; Yamamuro, M.; Hénocque, Y.; Nicholls, R.

    2012-01-01

    Social and ecological systems around the world are becoming increasingly globalized. From the standpoint of understanding coastal ecosystem behavior, system boundaries are not sufficient to define causes of change. A flutter in the stock market in Tokyo or Hong Kong can affect salmon producers in Norway or farmers in Togo. The globalization of opportunistic species and the disempowerment of people trying to manage their own affairs on a local scale seem to coincide with the globalization of trade. Human-accelerated environmental change, including climate change, can exacerbate this sense of disenfranchisement. The structure and functioning of coastal ecosystems have developed over thousands of years subject to environmental forces and constraints imposed mainly on local scales. However, phenomena that transcend these conventional scales have emerged with the explosion of human population, and especially with the rise of modern global culture. Here, we examine five broad questions of scale in the coastal zone: How big are coastal ecosystems and why should we care? Temporal scales of change in coastal waters and watersheds: Can we detect shifting baselines due to economic development and other drivers? Are footprints more important than boundaries? What makes a decision big? The tyranny of small decisions in coastal regions. Scales of complexity in coastal waters: the simple, the complicated or the complex? These questions do not have straightforward answers. There is no single "scale" for coastal ecosystems; their multiscale nature complicates our understanding and management of them. Coastal ecosystems depend on their watersheds as well as spatially-diffuse "footprints" associated with modern trade and material flows. Change occurs both rapidly and slowly on human time scales, and observing and responding to changes in coastal environments is a fundamental challenge. Apparently small human decisions collectively have potentially enormous consequences for coastal ecosystems.

  2. Zone separator for multiple zone vessels

    DOEpatents

    Jones, John B.

    1983-02-01

    A solids-gas contact vessel, having two vertically disposed distinct reaction zones, includes a dynamic seal passing solids from an upper to a lower zone and maintaining a gas seal against the transfer of the separate treating gases from one zone to the other, and including a stream of sealing fluid at the seal.

  3. Acute Kidney Injury and Big Data.

    PubMed

    Sutherland, Scott M; Goldstein, Stuart L; Bagshaw, Sean M

    2018-01-01

    The recognition of a standardized, consensus definition for acute kidney injury (AKI) has been an important milestone in critical care nephrology, which has facilitated innovation in prevention, quality of care, and outcomes research among the growing population of hospitalized patients susceptible to AKI. Concomitantly, there have been substantial advances in "big data" technologies in medicine, including electronic health records (EHR), data registries and repositories, and data management and analytic methodologies. EHRs are increasingly being adopted, clinical informatics is constantly being refined, and the field of EHR-enabled care improvement and research has grown exponentially. While these fields have matured independently, integrating the two has the potential to redefine and integrate AKI-related care and research. AKI is an ideal condition to exploit big data health care innovation for several reasons: AKI is common, increasingly encountered in hospitalized settings, imposes meaningful risk for adverse events and poor outcomes, has incremental cost implications, and has been plagued by suboptimal quality of care. In this concise review, we discuss the potential applications of big data technologies, particularly modern EHR platforms and health data repositories, to transform our capacity for AKI prediction, detection, and care quality. © 2018 S. Karger AG, Basel.

  4. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    PubMed

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  5. Comparative effectiveness research and big data: balancing potential with legal and ethical considerations.

    PubMed

    Gray, Elizabeth Alexandra; Thorpe, Jane Hyatt

    2015-01-01

    Big data holds big potential for comparative effectiveness research. The ability to quickly synthesize and use vast amounts of health data to compare medical interventions across settings of care, patient populations, payers and time will greatly inform efforts to improve quality, reduce costs and deliver more patient-centered care. However, the use of big data raises significant legal and ethical issues that may present barriers or limitations to the full potential of big data. This paper addresses the scope of some of these legal and ethical issues and how they may be managed effectively to fully realize the potential of big data.

  6. [Medical big data and precision medicine: prospects of epidemiology].

    PubMed

    Song, J; Hu, Y H

    2016-08-10

    With the development of high-throughput technologies, electronic medical record systems, and big data technology, the value of medical data has attracted growing attention. At the same time, the Precision Medicine Initiative has opened up new prospects for medical big data. As a methodological discipline, epidemiology focuses on exploiting the resources of existing big data and promoting the integration of translational research and knowledge so as to unlock the "black box" of the exposure-disease continuum, thereby accelerating the realization of the ultimate goal of precision medicine. The overall purpose, however, is to translate the evidence from scientific research into improved health of the people.

  7. Big bang photosynthesis and pregalactic nucleosynthesis of light elements

    NASA Technical Reports Server (NTRS)

    Audouze, J.; Lindley, D.; Silk, J.

    1985-01-01

    Two nonstandard scenarios for pregalactic synthesis of the light elements (H-2, He-3, He-4, and Li-7) are developed. Big bang photosynthesis occurs if energetic photons, produced by the decay of massive neutrinos or gravitinos, partially photodisintegrate He-4 (formed in the standard hot big bang) to produce H-2 and He-3. In this case, primordial nucleosynthesis no longer constrains the baryon density of the universe, or the number of neutrino species. Alternatively, one may dispense partially or completely with the hot big bang and produce the light elements by bombardment of primordial gas, provided that He-4 is synthesized by a later generation of massive stars.

  8. How Big is Earth?

    NASA Astrophysics Data System (ADS)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the Earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a known distance away, Eratosthenes was able to calculate the circumference of the Earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable precedent was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big is Earth expands on the Eratosthenes Project through the online learning environment provided by the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss and brainstorm their ideas and solutions in a discussion forum. There is an ongoing database of student measurements and another database collecting data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
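    The calculation Eratosthenes performed, as summarized in this abstract, is a one-line proportion: the difference in noon shadow angles between two sites is the fraction of the full circle subtended by the arc between them. The figures below (a 7.2° shadow angle and roughly 800 km between sites) are the commonly quoted approximate values, used here purely for illustration.

    ```python
    # Eratosthenes' proportion: angle difference / 360 degrees equals
    # arc distance / circumference.
    shadow_angle_deg = 7.2     # shadow angle at site A at local noon
    angle_at_site_b = 0.0      # sun directly overhead at site B (no shadow)
    distance_km = 800.0        # assumed north-south distance between the sites

    # Fraction of the full circle subtended by the arc between the sites.
    fraction = (shadow_angle_deg - angle_at_site_b) / 360.0
    circumference_km = distance_km / fraction
    print(round(circumference_km))   # → 40000
    ```

    Students in the project repeat exactly this computation with their own measured shadow angles and the known distance between collaborating schools.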

  9. Big Data in Health: a Literature Review from the Year 2005.

    PubMed

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  10. Effect of fungicides on Wyoming big sagebrush seed germination

    Treesearch

    Robert D. Cox; Lance H. Kosberg; Nancy L. Shaw; Stuart P. Hardegree

    2011-01-01

    Germination tests of Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle & Young [Asteraceae]) seeds often exhibit fungal contamination, but the use of fungicides should be avoided because fungicides may artificially inhibit germination. We tested the effect of seed-applied fungicides on germination of Wyoming big sagebrush at 2 different...

  11. AmeriFlux US-Rws Reynolds Creek Wyoming big sagebrush

    DOE Data Explorer

    Flerchinger, Gerald [USDA Agricultural Research Service

    2017-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-Rws Reynolds Creek Wyoming big sagebrush. Site Description - The site is located on the USDA-ARS's Reynolds Creek Experimental Watershed. It is dominated by Wyoming big sagebrush on land managed by USDI Bureau of Land Management.

  12. Who's Afraid of the Big Black Man?

    ERIC Educational Resources Information Center

    Johnson, Jason Kyle

    2018-01-01

    This article examines the experiences of big Black men in both their personal and professional lives. Black men are often perceived as being aggressive, violent, and physically larger than their White counterparts. The negative perceptions of Black men, particularly big Black men, often leads to negative encounters with police, educators, and…

  13. Big Data for cardiology: novel discovery?

    PubMed

    Mayer-Schönberger, Viktor

    2016-03-21

    Big Data promises to change cardiology through a massive increase in the data gathered and analysed; but its impact goes beyond improving incrementally existing methods. The potential of comprehensive data sets for scientific discovery is examined, and its impact on the scientific method generally and cardiology in particular is posited, together with likely consequences for research and practice. Big Data in cardiology changes how new insights are being discovered. For it to flourish, significant modifications in the methods, structures, and institutions of the profession are necessary. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.

  14. Big Data Analytics in Chemical Engineering.

    PubMed

    Chiang, Leo; Lu, Bo; Castillo, Ivan

    2017-06-07

    Big data analytics is the journey to turn data into insights for more informed business and operational decisions. As the chemical engineering community is collecting more data (volume) from different sources (variety), this journey becomes more challenging in terms of using the right data and the right tools (analytics) to make the right decisions in real time (velocity). This article highlights recent big data advancements in five industries, including chemicals, energy, semiconductors, pharmaceuticals, and food, and then discusses technical, platform, and culture challenges. To reach the next milestone in multiplying successes to the enterprise level, government, academia, and industry need to collaboratively focus on workforce development and innovation.

  15. What is worse than the “big one”?

    USGS Publications Warehouse

    Kerr, R. A.

    1988-01-01

    The first thought in the minds of many residents of the city of Whittier when the first shock hit them was "Is this the big one?", the San Andreas fault's once-in-150-years great shaker. It might as well have been for Whittier, which is 20 kilometers east of downtown Los Angeles. The ground shook harder there this month than it will when the big one does strike the distant San Andreas, which lies 50 kilometers away on the other side of the mountains. And this was only a moderate, magnitude 6.1 shock. Earthquakes of magnitude 7 and larger, 30 times more powerful, could rupture faults beneath the feet of Angelenos at any time. The loss of life and destruction could exceed that caused by the big one.

  16. Big endothelin changes the cellular miRNA environment in TMOb osteoblasts and increases mineralization.

    PubMed

    Johnson, Michael G; Kristianto, Jasmin; Yuan, Baozhi; Konicke, Kathryn; Blank, Robert

    2014-08-01

    Endothelin (ET1) promotes the growth of osteoblastic breast and prostate cancer metastases. Conversion of big ET1 to mature ET1, catalyzed primarily by endothelin converting enzyme 1 (ECE1), is necessary for ET1's biological activity. We previously identified the Ece1 locus as a positional candidate gene for a pleiotropic quantitative trait locus affecting femoral size, shape, mineralization, and biomechanical performance. We exposed TMOb osteoblasts continuously to 25 ng/ml big ET1. Cells were grown for 6 days in growth medium and then switched to mineralization medium for an additional 15 days with or without big ET1, by which time the TMOb cells form mineralized nodules. We quantified mineralization by alizarin red staining and analyzed levels of miRNAs known to affect osteogenesis. MicroRNA 126-3p was identified by a database search as a potential regulator of sclerostin (SOST) translation. TMOb cells exposed to big ET1 showed greater mineralization than control cells. Big ET1 repressed miRNAs targeting transcripts of osteogenic proteins and increased expression of miRNAs that target transcripts of proteins that inhibit osteogenesis. Big ET1 increased expression of miR-126-3p 121-fold versus control. To begin to assess the effect of big ET1 on SOST production, we analyzed both SOST transcription and protein production with and without big ET1, demonstrating that transcription and translation were uncoupled. Our data show that big ET1 signaling promotes mineralization. Moreover, the results suggest that big ET1's osteogenic effects are potentially mediated through changes in miRNA expression, a previously unrecognized big ET1 osteogenic mechanism.

  17. Breaking Sound Barriers: New Perspectives on Effective Big Band Development and Rehearsal

    ERIC Educational Resources Information Center

    Greig, Jeremy; Lowe, Geoffrey

    2014-01-01

    Jazz big band is a common extra-curricular musical activity in Western Australian secondary schools. Jazz big band offers important fundamentals that can help expand a student's musical understanding. However, the teaching of conventions associated with big band jazz has often been haphazard and can be daunting and frightening, especially for…

  18. Big data in psychology: Introduction to the special issue.

    PubMed

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
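    Theme (d) above, validating a predictive model on a hold-out sample, can be sketched in a few lines: fit on one subset, then compute error only on data the fit never saw. The toy dataset and slope-only model below are invented for illustration; real big-data validation follows the same split-fit-evaluate shape at much larger scale.

    ```python
    import random

    # Simulated dataset: y = 2x plus small Gaussian noise.
    random.seed(0)
    data = [(float(x), 2.0 * x + random.gauss(0, 0.1)) for x in range(200)]
    random.shuffle(data)

    # Split into a training set and an untouched hold-out sample.
    train, holdout = data[:150], data[150:]

    # Fit a slope-only least-squares model on the training subset.
    b = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

    # Assess the fitted model only on the hold-out sample.
    mse = sum((y - b * x) ** 2 for x, y in holdout) / len(holdout)
    print(round(b, 3), round(mse, 4))   # slope near 2, small hold-out error
    ```

    Reporting only the hold-out error, rather than the training error, is what guards against the overfitting the special issue's authors warn about.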

  19. The big five factors of personality and their relationship to personality disorders.

    PubMed

    Dyce, J A

    1997-10-01

    Articles examining the relationship between the Big Five factors of personality and personality disorders (PDs) are reviewed. A survey of these studies indicates that there is some agreement regarding the relationship between the Big Five and PDs. However, the level of agreement varies and may be a function of instrumentation, the method of report, or how data have been analyzed. Future research should consider the role of peer-ratings, examine the relationship between PDs and the first-order factors of the Big Five, consider dimensions over and above the Big Five as predictors of PDs.

  20. Isotopic data for Late Cretaceous intrusions and associated altered and mineralized rocks in the Big Belt Mountains, Montana

    USGS Publications Warehouse

    du Bray, Edward A.; Unruh, Daniel M.; Hofstra, Albert H.

    2017-03-07

    The quartz monzodiorite of Mount Edith and the concentrically zoned intrusive suite of Boulder Baldy constitute the principal Late Cretaceous igneous intrusions hosted by Mesoproterozoic sedimentary rocks of the Newland Formation in the Big Belt Mountains, Montana. These calc-alkaline plutonic masses are manifestations of subduction-related magmatism that prevailed along the western edge of North America during the Cretaceous. Radiogenic isotope data for neodymium, strontium, and lead indicate that the petrogenesis of the associated magmas involved a combination of (1) sources that were compositionally heterogeneous at the scale of the geographically restricted intrusive rocks in the Big Belt Mountains and (2) variable contamination by crustal assimilants also having diverse isotopic compositions. Altered and mineralized rocks temporally, spatially, and genetically related to these intrusions manifest at least two isotopically distinct mineralizing events, both of which involve major inputs from spatially associated Late Cretaceous igneous rocks. Alteration and mineralization of rock associated with the intrusive suite of Boulder Baldy requires a component characterized by significantly more radiogenic strontium than that characteristic of the associated igneous rocks. However, the source of such a component was not identified in the Big Belt Mountains. Similarly, altered and mineralized rocks associated with the quartz monzodiorite of Mount Edith include a component characterized by significantly more radiogenic strontium and lead, particularly as defined by 207Pb/204Pb values. The source of this component appears to be fluids that equilibrated with proximal Newland Formation rocks. Oxygen isotope data for rocks of the intrusive suite of Boulder Baldy are similar to those of subduction-related magmatism that include mantle-derived components; oxygen isotope data for altered and mineralized equivalents are slightly lighter.