Sample records for "yields big dividends"

  1. Quantized expected returns in terms of dividend yield at the money

    NASA Astrophysics Data System (ADS)

    Dieng, Lamine

    2011-03-01

    We use the Bachelier (additive) model and the Black-Scholes (multiplicative) model for the stock price movement of an investor who has entered into an American call option contract. We assume the investor pays a certain dividend yield on the expected rate of return from buying stocks. We also assume the stock price to be initially in the out-of-the-money state and to eventually move up through the at-the-money state to the deep-in-the-money state, where the expected future payoffs and returns are positive for the stock holder. We call the at-the-money point a singularity because the expected payoff vanishes there. Then, using martingale, supermartingale and Markov theories, we obtain the Bachelier-type and the Black-Scholes equations, which we hedge in the limit where the change in the expected payoff of the call option is extremely small. Hence, by comparison we obtain the time-independent Schroedinger equation of quantum mechanics. We solve the time-independent Schroedinger equation completely for both models to obtain the expected rate of return and the expected payoff for the stock holder at the money. We find the expected rate of return to be quantized in terms of the dividend yield.

  2. 12 CFR 931.4 - Dividends.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Dividends. 931.4 Section 931.4 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.4 Dividends. (a) In general. A Bank may pay dividends on Class A or...

  3. 12 CFR 931.4 - Dividends.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Dividends. 931.4 Section 931.4 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.4 Dividends. (a) In general. A Bank may pay dividends on Class A or...

  4. Triple dividends of water consumption charges in South Africa

    NASA Astrophysics Data System (ADS)

    Letsoalo, Anthony; Blignaut, James; de Wet, Theuns; de Wit, Martin; Hess, Sebastiaan; Tol, Richard S. J.; van Heerden, Jan

    2007-05-01

    The South African government is exploring ways to address water scarcity problems by introducing a water resource management charge on the quantity of water used in sectors such as irrigated agriculture, mining, and forestry. It is expected that a more efficient water allocation, lower use, and a positive impact on poverty can be achieved. This paper reports on the validity of these claims by applying a computable general equilibrium model to analyze the triple dividend of water consumption charges in South Africa: reduced water use, more rapid economic growth, and a more equal income distribution. It is shown that an appropriate budget-neutral combination of water charges, particularly on irrigated agriculture and coal mining, and reduced indirect taxes, particularly on food, would yield triple dividends, that is, less water use, more growth, and less poverty.

  5. Determinants of corporate dividend policy in Indonesia

    NASA Astrophysics Data System (ADS)

    Lestari, H. S.

    2018-01-01

    This study aims to investigate the determinant factors that affect dividend policy. The sample used in this research is manufacturing companies listed on the Indonesia Stock Exchange (IDX) over the period 2011-2015. The independent variables are earnings, cash flow, free cash flow, debt, growth opportunities, investment opportunities, firm size, largest shareholder, firm risk, and lagged dividend, with dividend policy as the dependent variable. The study examines a total of 32 manufacturing companies. Multiple regression analysis, carried out with the software Eviews 9.0, reveals that earnings, cash flow, free cash flow, firm size, and lagged dividend have a significant effect on dividend policy, whereas debt, growth opportunities, investment opportunities, largest shareholder, and firm risk have no significant effect. The results of this study are expected to help financial managers improve corporate profits and to serve as basic information for investment decisions.

  6. 46 CFR 283.3 - Dividend policy criteria.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... amount up to 100 percent of retained earnings, unless there is an operating loss in the fiscal year to... proposed dividend, it may declare a dividend of up to 40 percent of prior years' earnings, less any... of the years included in the prior years' earnings calculation dividends were paid under the 100...

  7. 12 CFR 917.9 - Dividends.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Dividends. 917.9 Section 917.9 Banks and Banking FEDERAL HOUSING FINANCE BOARD GOVERNANCE AND MANAGEMENT OF THE FEDERAL HOME LOAN BANKS POWERS AND... chapter. Dividends on such capital stock shall be computed without preference. (b) A Bank's board of...

  8. 12 CFR 327.53 - Allocation and payment of dividends.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... period from zero to 100 percent. The 15-year period shall begin as if it had applied to a dividend based... dividends. (a)(1) The allocation of any dividend among insured depository institutions shall be based on the... following table, the part of a dividend allocated based upon an institution's 1996 assessment base share...

  9. 26 CFR 1.61-9 - Dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... regulated investment companies, see sections 851 through 855, and the regulations thereunder. As to distributions made by real estate investment trusts, see sections 856 through 858, and the regulations... under section 37, relating to retirement income. (b) Dividends in kind; stock dividends; stock...

  10. Optimal dividends in the Brownian motion risk model with interest

    NASA Astrophysics Data System (ADS)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
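
    The setup described in this record (a Brownian surplus earning interest, with dividends paid at a rate capped by a constant whenever the surplus exceeds a threshold) can be sketched with a crude Monte Carlo simulation. This is an illustrative sketch only: all parameter values, the Euler discretization, and the specific threshold are assumptions for the example, not the paper's analytical solution.

```python
import math
import random

def discounted_dividends(x0=5.0, mu=1.0, sigma=2.0, r=0.02, delta=0.05,
                         threshold=8.0, max_rate=0.5, dt=0.02,
                         horizon=100.0, seed=0):
    """One path of the surplus dX_t = (r*X_t + mu - l_t) dt + sigma dW_t,
    where the dividend rate l_t equals `max_rate` while the surplus is
    above `threshold` and 0 otherwise (a threshold strategy).  Payments
    are discounted at force of interest `delta` and stop at ruin."""
    rng = random.Random(seed)
    x, t, total = x0, 0.0, 0.0
    while t < horizon and x >= 0.0:
        rate = max_rate if x > threshold else 0.0
        total += math.exp(-delta * t) * rate * dt
        x += (r * x + mu - rate) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return total

# Crude Monte Carlo estimate of the expected discounted dividends.
est = sum(discounted_dividends(seed=s) for s in range(200)) / 200
print(round(est, 3))
```

    In this toy setup the estimate is necessarily below max_rate/delta (the value of paying the capped rate forever); the paper's contribution is characterizing the threshold level that maximizes this expectation.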

  11. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... may or may not occur in the future. These formulas apply to both dividend-bearing and interest-bearing... by the formula shown below. Credit unions may calculate the annual percentage yield using projected... the formula, credit unions shall assume that all principal and dividends remain on deposit for the...
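
    The general formula these appendices build on, APY = 100 * [(1 + Dividends/Principal)^(365/Days in term) - 1], can be evaluated directly. The dollar figures in the example below are made up for illustration; the formula assumes, as the excerpt says, that all principal and dividends remain on deposit for the term.

```python
def annual_percentage_yield(principal, dividends, days):
    """General APY formula: annualize the dividends earned over a term of
    `days` days on `principal`, compounding at the term's realized rate."""
    return 100.0 * ((1.0 + dividends / principal) ** (365.0 / days) - 1.0)

# e.g. $1,000 on deposit earning $30.37 in dividends over a 365-day term
print(round(annual_percentage_yield(1000.0, 30.37, 365), 2))  # → 3.04
```

    For terms shorter than a year the exponent 365/days compounds the realized return forward; e.g. $15.00 earned over half a year (182.5 days) annualizes to about 3.02%, slightly more than 2 × 1.5% because of compounding.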

  12. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... may or may not occur in the future. These formulas apply to both dividend-bearing and interest-bearing... by the formula shown below. Credit unions may calculate the annual percentage yield using projected... the formula, credit unions shall assume that all principal and dividends remain on deposit for the...

  13. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... may or may not occur in the future. These formulas apply to both dividend-bearing and interest-bearing... by the formula shown below. Credit unions may calculate the annual percentage yield using projected... the formula, credit unions shall assume that all principal and dividends remain on deposit for the...

  14. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... may or may not occur in the future. These formulas apply to both dividend-bearing and interest-bearing... by the formula shown below. Credit unions may calculate the annual percentage yield using projected... the formula, credit unions shall assume that all principal and dividends remain on deposit for the...

  15. On optimal dividends: From reflection to refraction

    NASA Astrophysics Data System (ADS)

    Gerber, Hans U.; Shiu, Elias S. W.

    2006-02-01

    The problem goes back to a paper that Bruno de Finetti presented to the International Congress of Actuaries in New York (1957). In a stock company that is involved in risky business, what is the optimal dividend strategy, that is, what is the strategy that maximizes the expectation of the discounted dividends (until possible ruin) to the shareholders? Jeanblanc-Picque and Shiryaev [Russian Math. Surveys 20 (1995) 257-277] and Asmussen and Taksar [Insurance: Math. Econom. 20 (1997) 1-15] solved the problem by modeling the income process of the company by a Wiener process and imposing the condition of a bounded dividend rate. Here, we present some down-to-earth calculations in this context.

  16. 26 CFR 1.305-1 - Stock dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... exchange for its convertible preferred class B stock. Under the terms of the class B stock, its conversion... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Stock dividends. 1.305-1 Section 1.305-1...) INCOME TAXES Effects on Recipients § 1.305-1 Stock dividends. (a) In general. Under section 305, a...

  17. 78 FR 73128 - Dividend Equivalents From Sources Within the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ... Dividend Equivalents From Sources Within the United States AGENCY: Internal Revenue Service (IRS), Treasury... dividends, and the amount of the dividend equivalents. This information is required to establish whether a... valid control number assigned by the Office of Management and Budget. Books or records relating to a...

  18. 12 CFR 208.5 - Dividends and other distributions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... paid in the form of common stock. (c) Earnings limitations on payment of dividends. (1) A member bank... Condition and Income) during the current calendar year and the retained net income of the prior two calendar years, unless the dividend has been approved by the Board. (2) “Retained net income” in a calendar year...

  19. 18 CFR 367.4380 - Account 438, Dividends declared-common stock.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... GAS ACT Retained Earnings Accounts § 367.4380 Account 438, Dividends declared—common stock. (a) This account must include amounts declared payable out of retained earnings as dividends on actually...

  20. 29 CFR 4043.31 - Extraordinary dividend or stock redemption.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TERMINATIONS REPORTABLE EVENTS AND CERTAIN OTHER NOTIFICATION REQUIREMENTS Post-Event Notice of Reportable Events § 4043.31 Extraordinary dividend or stock redemption. (a) Reportable event. A reportable event...) Extraordinary dividends and stock redemptions. The reportable event described in section 4043(c)(11) of ERISA...

  1. 26 CFR 514.2 - Dividends.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... resident of France when such dividend is so paid, or by a French corporation, shall not exceed 15 percent... such withholding agents to be a French corporation not having a permanent establishment in the United...

  2. 26 CFR 514.2 - Dividends.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... resident of France when such dividend is so paid, or by a French corporation, shall not exceed 15 percent... such withholding agents to be a French corporation not having a permanent establishment in the United...

  3. 26 CFR 514.2 - Dividends.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... resident of France when such dividend is so paid, or by a French corporation, shall not exceed 15 percent... such withholding agents to be a French corporation not having a permanent establishment in the United...

  4. 26 CFR 514.2 - Dividends.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... resident of France when such dividend is so paid, or by a French corporation, shall not exceed 15 percent... such withholding agents to be a French corporation not having a permanent establishment in the United...

  5. 26 CFR 1.243-3 - Certain dividends from foreign corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 3 2011-04-01 2011-04-01 false Certain dividends from foreign corporations. 1...) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Special Deductions for Corporations § 1.243-3 Certain dividends from foreign corporations. (a) In general. (1) In determining the deduction provided in section...

  6. 26 CFR 1.243-3 - Certain dividends from foreign corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 3 2013-04-01 2013-04-01 false Certain dividends from foreign corporations. 1...) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Special Deductions for Corporations § 1.243-3 Certain dividends from foreign corporations. (a) In general. (1) In determining the deduction provided in section...

  7. 26 CFR 1.243-3 - Certain dividends from foreign corporations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 3 2012-04-01 2012-04-01 false Certain dividends from foreign corporations. 1...) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Special Deductions for Corporations § 1.243-3 Certain dividends from foreign corporations. (a) In general. (1) In determining the deduction provided in section...

  8. 26 CFR 1.243-3 - Certain dividends from foreign corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 3 2014-04-01 2014-04-01 false Certain dividends from foreign corporations. 1...) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Special Deductions for Corporations § 1.243-3 Certain dividends from foreign corporations. (a) In general. (1) In determining the deduction provided in section...

  9. 26 CFR 1.243-3 - Certain dividends from foreign corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Certain dividends from foreign corporations. 1...) INCOME TAX (CONTINUED) INCOME TAXES Special Deductions for Corporations § 1.243-3 Certain dividends from foreign corporations. (a) In general. (1) In determining the deduction provided in section 243(a), section...

  10. 18 CFR 367.4190 - Account 419, Interest and dividend income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 419, Interest and dividend income. 367.4190 Section 367.4190 Conservation of Power and Water Resources FEDERAL..., advances, special deposits, tax refunds and all other interest-bearing assets, and dividends on stocks of...

  11. 18 CFR 367.4190 - Account 419, Interest and dividend income.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Account 419, Interest and dividend income. 367.4190 Section 367.4190 Conservation of Power and Water Resources FEDERAL..., advances, special deposits, tax refunds and all other interest-bearing assets, and dividends on stocks of...

  12. 26 CFR 1.561-2 - When dividends are considered paid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... shareholder. A deduction for dividends paid during the taxable year will not be permitted unless the shareholder receives the dividend during the taxable year for which the deduction is claimed. See section 563... cover properly stamped and addressed to the shareholder at his last known address, at such time that in...

  13. Stochastic optimization algorithms for barrier dividend strategies

    NASA Astrophysics Data System (ADS)

    Yin, G.; Song, Q. S.; Yang, H.

    2009-01-01

    This work focuses on finding the optimal barrier policy for an insurance risk model in which dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with existing results in the literature, more general surplus processes are considered: precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
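
    The idea of tuning a barrier level from noise-corrupted dividend observations alone can be illustrated with a Kiefer-Wolfowitz-style finite-difference scheme: simulate the discounted dividends at two perturbed barrier levels and move the barrier along the estimated gradient. The surplus model, step sizes, and all parameter values below are illustrative assumptions, not the algorithm of the paper.

```python
import math
import random

def simulate_dividends(barrier, x0=5.0, mu=1.0, sigma=2.0, delta=0.05,
                       dt=0.02, horizon=50.0, rng=None):
    """Discounted dividends of one path under a barrier strategy:
    any surplus above `barrier` is paid out immediately."""
    rng = rng or random.Random()
    x, t, total = x0, 0.0, 0.0
    while t < horizon and x >= 0.0:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x > barrier:                      # pay the overflow as a dividend
            total += math.exp(-delta * t) * (x - barrier)
            x = barrier
        t += dt
    return total

def optimize_barrier(b0=10.0, iters=100, c=1.0, a=0.5, seed=1):
    """Kiefer-Wolfowitz stochastic approximation: estimate the gradient of the
    expected discounted dividends from two noisy simulations per iteration
    and move the barrier uphill with a diminishing step size a/n."""
    rng = random.Random(seed)
    b = b0
    for n in range(1, iters + 1):
        plus = simulate_dividends(b + c, rng=rng)
        minus = simulate_dividends(b - c, rng=rng)
        grad = (plus - minus) / (2.0 * c)
        b = min(100.0, max(0.5, b + (a / n) * grad))
    return b

b_opt = optimize_barrier()
print(round(b_opt, 2))
```

    Because each gradient estimate is built from simulated (noisy) dividend totals, no closed-form surplus model is needed, which is the feature the abstract emphasizes; the clamp on `b` is an ad-hoc safeguard for this toy version.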

  14. 26 CFR 1.243-1 - Deduction for dividends received by corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 3 2010-04-01 2010-04-01 false Deduction for dividends received by corporations... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Special Deductions for Corporations § 1.243-1 Deduction for dividends received by corporations. (a)(1) A corporation is allowed a deduction under section 243 for...

  15. 26 CFR 514.3 - Dividends received by addressee not actual owner.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (CONTINUED) REGULATIONS UNDER TAX CONVENTIONS FRANCE Withholding of Tax § 514.3 Dividends received by... in France of any dividend, paid on or after January 1, 1957, from which United States tax at the... source. (2) Fiduciary or partnership. A fiduciary or a partnership with an address in France which...

  16. 26 CFR 514.2 - Dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) REGULATIONS UNDER TAX CONVENTIONS FRANCE... resident of France when such dividend is so paid, or by a French corporation, shall not exceed 15 percent...) Thus, if a nonresident alien individual who is a resident of France performs personal services within...

  17. 26 CFR 509.108 - Dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SWITZERLAND General Income Tax § 509.108 Dividends. (a) General. (1) The rate of United States tax imposed by... nonresident alien individual who is a resident of Switzerland, or by a Swiss corporation or other entity... resident of Switzerland performs personal services within the United States during the taxable year, but...

  18. 75 FR 20384 - ABB, Inc., Including On-Site Leased Workers From Spherion Staffing, Dividend Staffing, Mystaff...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    ...-Site Leased Workers From Spherion Staffing, Dividend Staffing, Mystaff, and Zero Chaos, Wichita Falls... from Spherion Staffing, Dividend Staffing, MyStaff, and Zero Chaos were employed on-site by the Wichita..., Dividend Staffing, MyStaff, and Zero Chaos working on-site at the Wichita Falls, Texas location of ABB, Inc...

  19. Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process

    PubMed Central

    Yuen, Kam Chuen; Shen, Ying

    2015-01-01

    We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655

  20. Topics in Finance Part VII--Dividend Policy

    ERIC Educational Resources Information Center

    Laux, Judy

    2011-01-01

    This series inspects the major topics in finance, reviewing the roles of stockholder wealth maximization, the risk-return tradeoff, and agency conflicts. The current article, devoted to dividend policy, also reviews the topic as presented in textbooks and the literature.

  1. The perturbed Sparre Andersen model with a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Heli; Yin, Chuancun

    2008-10-01

    In this paper, we consider a Sparre Andersen model perturbed by diffusion, with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.

  2. 26 CFR 1.1244(d)-3 - Stock dividend, recapitalizations, changes in name, etc.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... time of the issuance. In 1960, C purchases an additional 200 shares of such stock from another... will be treated as meeting such requirements. Assuming all the shares with respect to which the dividend is received have equal rights to dividends, such part is the number of shares which bears the same...

  3. A renewal jump-diffusion process with threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Li, Bo; Wu, Rong; Song, Min

    2009-06-01

    In this paper, we consider a jump-diffusion risk process with the threshold dividend strategy. Both the distributions of the inter-arrival times and the claims are assumed to be in the class of phase-type distributions. The expected discounted dividend function and the Laplace transform of the ruin time are discussed. Motivated by Asmussen [S. Asmussen, Stationary distributions for fluid flow models with or without Brownian noise, Stochastic Models 11 (1) (1995) 21-49], instead of studying the original process, we study the constructed fluid flow process and their closed-form formulas are obtained in terms of matrix expression. Finally, numerical results are provided to illustrate the computation.

  4. 26 CFR 1.243-4 - Qualifying dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... section 243(b) (2) is effective for such taxable years. Since $10,000 of the February 1, 1966... exemption election is effective for such year, $10,000 of the distribution does not satisfy the condition... from Y's current year's earnings and profits (1969) $10,000 (b) Dividend from earnings and profits of Z...

  5. 26 CFR 1.243-4 - Qualifying dividends.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... effective for such taxable years. Since $10,000 of the February 1, 1966, distribution was made out of... effective for such year, $10,000 of the distribution does not satisfy the condition specified in... from Y's current year's earnings and profits (1969) $10,000 (b) Dividend from earnings and profits of Z...

  6. 26 CFR 1.860-1 - Deficiency dividends.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-1 Deficiency dividends. Section 860 allows a qualified investment entity to be relieved from the payment of a deficiency in (or to be allowed a credit or refund of) certain taxes. “Qualified investment entity” is defined in section 860(b). The taxes...

  7. 26 CFR 1.860-1 - Deficiency dividends.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-1 Deficiency dividends. Section 860 allows a qualified investment entity to be relieved from the payment of a deficiency in (or to be allowed a credit or refund of) certain taxes. “Qualified investment entity” is defined in section 860(b). The taxes...

  8. 26 CFR 1.860-1 - Deficiency dividends.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-1 Deficiency dividends. Section 860 allows a qualified investment entity to be relieved from the payment of a deficiency in (or to be allowed a credit or refund of) certain taxes. “Qualified investment entity” is defined in section 860(b). The taxes...

  9. 26 CFR 1.860-1 - Deficiency dividends.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-1 Deficiency dividends. Section 860 allows a qualified investment entity to be relieved from the payment of a deficiency in (or to be allowed a credit or refund of) certain taxes. “Qualified investment entity” is defined in section 860(b). The taxes...

  10. 26 CFR 1.860-1 - Deficiency dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) INCOME TAXES Real Estate Investment Trusts § 1.860-1 Deficiency dividends. Section 860 allows a qualified investment entity to be relieved from the payment of a deficiency in (or to be allowed a credit or refund of) certain taxes. “Qualified investment entity” is defined in section 860(b). The taxes referred to are those...

  11. 18 CFR 367.2380 - Account 238, Dividends declared.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 238, Dividends declared. 367.2380 Section 367.2380 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO...

  12. 12 CFR 327.52 - Annual dividend determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the DIF reserve ratio as of December 31st of 2008 or any later year equals or exceeds 1.35 percent... dividend based upon the reserve ratio of the DIF as of December 31st of the preceding year, and the amount... ratio of the DIF equals or exceeds 1.35 percent of estimated insured deposits and does not exceed 1.50...

  13. Understanding China's Demographic Dividends and Labor Issue

    ERIC Educational Resources Information Center

    Peng, Xizhe

    2013-01-01

    One of the major concerns about the one-child policy is its negative impact on the current and future labor force in China. People have talked about the Lewis Turning Point and the end of demographic dividends. Some of these arguments, however, can be misleading. The working-age population (ages 15 to 59) can be treated as the potential labor…

  14. 26 CFR 1.78-1 - Dividends received from certain foreign corporations by certain domestic corporations choosing...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... paid by certain domestic corporations treated as a section 78 dividend. Any reduction under section 907... dividends received from certain foreign corporations, or increase the earnings and profits of the domestic... upon the gross-up under section 78, see paragraph (c) of § 1.963-4. For rules respecting the reduction...

  15. 26 CFR 1.855-1 - Dividends paid by regulated investment company after close of taxable year.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Real Estate Investment Trusts § 1.855-1 Dividends paid by regulated investment company after close of..., 1960, as amended by T.D. 6921, 32 FR 8757, June 20, 1967] Real Estate Investment Trusts ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Dividends paid by regulated investment company...

  16. Optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes: An alternative approach

    NASA Astrophysics Data System (ADS)

    Yin, Chuancun; Wang, Chunwei

    2009-11-01

    The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.

  17. 26 CFR 1.1291-9 - Deemed dividend election.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... income as a dividend the shareholder's pro rata share of the post-1986 earnings and profits of the PFIC...) and (2) of this section. (2) Post-1986 earnings and profits defined—(i) In general. For purposes of this section, the term post-1986 earnings and profits means the undistributed earnings and profits...

  18. 26 CFR 1.1291-9 - Deemed dividend election.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... income as a dividend the shareholder's pro rata share of the post-1986 earnings and profits of the PFIC...) and (2) of this section. (2) Post-1986 earnings and profits defined—(i) In general. For purposes of this section, the term post-1986 earnings and profits means the undistributed earnings and profits...

  19. 26 CFR 1.855-1 - Dividends paid by regulated investment company after close of taxable year.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Companies and Real Estate Investment Trusts § 1.855-1 Dividends paid by regulated investment company after.... 6500, 25 FR 11910, Nov. 26, 1960, as amended by T.D. 6921, 32 FR 8757, June 20, 1967] Real Estate... 26 Internal Revenue 9 2011-04-01 2011-04-01 false Dividends paid by regulated investment company...

  20. 26 CFR 1.855-1 - Dividends paid by regulated investment company after close of taxable year.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Companies and Real Estate Investment Trusts § 1.855-1 Dividends paid by regulated investment company after.... 6500, 25 FR 11910, Nov. 26, 1960, as amended by T.D. 6921, 32 FR 8757, June 20, 1967] Real Estate... 26 Internal Revenue 9 2013-04-01 2013-04-01 false Dividends paid by regulated investment company...

  1. 26 CFR 1.855-1 - Dividends paid by regulated investment company after close of taxable year.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Companies and Real Estate Investment Trusts § 1.855-1 Dividends paid by regulated investment company after.... 6500, 25 FR 11910, Nov. 26, 1960, as amended by T.D. 6921, 32 FR 8757, June 20, 1967] Real Estate... 26 Internal Revenue 9 2014-04-01 2014-04-01 false Dividends paid by regulated investment company...

  2. 26 CFR 1.855-1 - Dividends paid by regulated investment company after close of taxable year.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Companies and Real Estate Investment Trusts § 1.855-1 Dividends paid by regulated investment company after.... 6500, 25 FR 11910, Nov. 26, 1960, as amended by T.D. 6921, 32 FR 8757, June 20, 1967] Real Estate... 26 Internal Revenue 9 2012-04-01 2012-04-01 false Dividends paid by regulated investment company...

  3. 12 CFR 225.103 - Bank holding company acquiring stock by dividends, stock splits or exercise of rights.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... dividends, stock splits or exercise of rights. 225.103 Section 225.103 Banks and Banking FEDERAL RESERVE... holding company acquiring stock by dividends, stock splits or exercise of rights. (a) The Board of... bank stock splits without the Board's prior approval, and whether such a company may exercise, without...

  4. 12 CFR 225.103 - Bank holding company acquiring stock by dividends, stock splits or exercise of rights.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... dividends, stock splits or exercise of rights. 225.103 Section 225.103 Banks and Banking FEDERAL RESERVE... holding company acquiring stock by dividends, stock splits or exercise of rights. (a) The Board of... bank stock splits without the Board's prior approval, and whether such a company may exercise, without...

  5. 12 CFR 225.103 - Bank holding company acquiring stock by dividends, stock splits or exercise of rights.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... dividends, stock splits or exercise of rights. 225.103 Section 225.103 Banks and Banking FEDERAL RESERVE... holding company acquiring stock by dividends, stock splits or exercise of rights. (a) The Board of... bank stock splits without the Board's prior approval, and whether such a company may exercise, without...

  6. The perturbed compound Poisson risk model with constant interest and a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Liu, Zaiming

    2010-03-01

In this paper, we consider the compound Poisson risk model perturbed by diffusion, with constant interest and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the nth moment of the present value of all dividends paid until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case in which the claim size distribution is exponential is considered in some detail.
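The quantities described above have no simple closed form in general, but the surplus process itself is straightforward to simulate. A minimal Monte Carlo sketch of a compound Poisson surplus perturbed by Brownian motion, with interest and a threshold dividend strategy; the function name, parameter values, and the specific dividend rule (premiums reduced by a fraction above the threshold) are illustrative assumptions, not taken from the paper:

```python
import math
import random

def estimate_ruin_prob(u0=10.0, c=1.5, lam=1.0, mu=1.0, sigma=0.5,
                       r=0.02, b=15.0, alpha=0.5, horizon=50.0,
                       dt=0.01, n_paths=500, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    compound Poisson risk model perturbed by diffusion, with interest
    rate r and a threshold dividend strategy: above the threshold b,
    a fraction alpha of the premium income is paid out as dividends."""
    random.seed(seed)
    ruined = 0
    for _ in range(n_paths):
        u, t = u0, 0.0
        while t < horizon:
            # net income: premiums (reduced above b) plus interest on surplus
            rate = c if u < b else c * (1.0 - alpha)
            u += (rate + r * u) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            # claims arrive as a Poisson process; sizes are exponential(mean mu)
            if random.random() < lam * dt:
                u -= random.expovariate(1.0 / mu)
            if u < 0.0:
                ruined += 1
                break
            t += dt
    return ruined / n_paths
```

The Euler step is a crude discretization; the paper works instead with exact integro-differential equations, for which simulation like this serves only as a sanity check.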

  7. Do Yield and Quality of Big Bluestem and Switchgrass Feedstock Decline over Winter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jane M. F.; Gresham, Garold L.

Switchgrass (Panicum virgatum L.) and big bluestem (Andropogon gerardii Vitman) are potential perennial bioenergy feedstocks. Feedstock storage limitations, labor constraints at harvest, and the environmental benefits provided by perennials are rationales for developing localized perennial feedstocks as an alternative to, or in conjunction with, annual feedstocks (i.e., crop residues). Little information is available on the yield, mineral, and thermochemical properties of native species as related to harvest time. The study's objectives were to compare feedstock quantity and quality between grasses harvested in the fall and the following spring. It was hypothesized that biomass yield might decline, but that translocation and/or leaching of minerals from the feedstock would improve feedstock quality. Feedstock yield did not differ by crop, harvest time, or their interactions. Both grasses averaged 6.0 Mg ha-1 (fall) and 5.4 Mg ha-1 (spring), with similar high heating values (17.7 MJ kg-1). The K/(Ca + Mg) ratio, used as a quality indicator, declined below the 0.5 threshold, but energy yield (megajoules per kilogram) decreased 13% when harvest was delayed until spring. Only once during the four study years were conditions ideal for early spring harvest; in contrast, during another spring, very muddy conditions resulted in excessive soil contamination. Early spring harvest may be hampered by late snow, lodging, and muddy conditions that can delay or prevent harvest and result in soil contamination of the feedstock. However, reducing slagging/fouling potential and the mass of mineral nutrients removed from the field, without a dramatic loss in biomass or caloric content, are reasons to delay harvest until spring.

  8. Portfolio Optimization with Stochastic Dividends and Stochastic Volatility

    ERIC Educational Resources Information Center

    Varga, Katherine Yvonne

    2015-01-01

We consider an optimal investment-consumption portfolio model in which an investor receives stochastic dividends. As a first problem, we allow the drift of the stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
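As context for the HJB approach described above, the constant-coefficient baseline (Merton's problem with CRRA utility) admits a closed-form optimal risky-asset fraction, which the stochastic-dividend and stochastic-volatility extensions generalize. A minimal sketch; the function name and example values are illustrative, not from the dissertation:

```python
def merton_fraction(mu, r, sigma, gamma):
    """Merton's optimal constant fraction of wealth invested in the risky
    asset under CRRA utility with risk aversion gamma:
        pi* = (mu - r) / (gamma * sigma**2),
    where mu is the stock drift, r the risk-free rate, sigma the volatility."""
    if gamma <= 0 or sigma <= 0:
        raise ValueError("requires gamma > 0 and sigma > 0")
    return (mu - r) / (gamma * sigma ** 2)

# Example: 6% equity premium, 20% volatility, risk aversion 2
# -> invest 75% of wealth in the risky asset.
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=2.0)
```

When drift or volatility become state-dependent, as in the abstract, pi* is no longer constant and must be recovered from the HJB equation's first-order condition.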

  9. Improving sustainable seed yield in Wyoming big sagebrush

    Treesearch

    Jeremiah C. Armstrong

    2007-01-01

As part of the Great Basin Restoration Initiative, the effects of browsing, competition removal, pruning, fertilization, and seed collection methods on increasing seed production in Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle & Young) were studied. Study sites were located in Idaho, Nevada, and Utah. A split-plot...

  10. 26 CFR 1.811-2 - Dividends to policyholders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... such excess shall be a net decrease referred to in section 809(c)(2). (c) Reserves for dividends to... premiums (as defined in section 809(c) and paragraph (a)(1)(ii) of § 1.809-4). Thus, so-called excess... policyholders paid during the taxable year: (i) Increased by the excess of the amounts held as reserves for...

  11. 12 CFR 225.103 - Bank holding company acquiring stock by dividends, stock splits or exercise of rights.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... dividends, stock splits or exercise of rights. 225.103 Section 225.103 Banks and Banking FEDERAL RESERVE... § 225.103 Bank holding company acquiring stock by dividends, stock splits or exercise of rights. (a) The... participate in bank stock splits without the Board's prior approval, and whether such a company may exercise...

  12. 12 CFR 225.103 - Bank holding company acquiring stock by dividends, stock splits or exercise of rights.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... dividends, stock splits or exercise of rights. 225.103 Section 225.103 Banks and Banking FEDERAL RESERVE... § 225.103 Bank holding company acquiring stock by dividends, stock splits or exercise of rights. (a) The... participate in bank stock splits without the Board's prior approval, and whether such a company may exercise...

  13. Lithologic and hydraulic controls on network-scale variations in sediment yield: Big Wood and North Fork Big Lost Rivers, Idaho

    NASA Astrophysics Data System (ADS)

    Mueller, E. R.; Pitlick, J.; Smith, M. E.

    2008-12-01

Channel morphology and sediment textures in streams and rivers are a product of the flux of sediment and water conveyed to channel networks. Differences in sediment supply between watersheds should thus be reflected in differences in channel and bed-material properties. To address this directly, field measurements of channel morphology, substrate lithology, and bed sediment textures were made at 35 sites distributed evenly across two adjacent watersheds in south-central Idaho, the Big Wood River (BW) and the North Fork Big Lost River (NBL). Measurements of sediment transport indicate a five-fold difference in sediment yields between these basins, despite their geographic proximity. Three dominant lithologic modes (an intrusive suite, an extrusive volcanic suite, and a sedimentary suite) exist in different proportions between the basins. The spatial distribution of lithologies exerts a first-order control on the variation in sediment supply, bed sediment textures, and the size distribution of the bed load at the basin outlet. Here we document the coupled hydraulic and sedimentologic response of these stream channel networks to differences in sediment supply. The results show that width and depth are remarkably similar between the two basins across a range of channel gradients and drainage areas, the primary difference being decreased bed armoring in the NBL. As a result, dimensionless shear stress (τ*) increases downstream in the NBL, averaging 0.073 despite declining slope; the opposite holds in the BW, where τ* averages 0.048. Lithologic characterization of the substrate indicates that much of the discrepancy in bed armoring can be attributed to an increasing downstream supply of resistant intrusive granitic rocks to the BW, whereas the NBL is dominated by erodible extrusive volcanic and sedimentary rocks. A simple modeling approach using an excess shear stress-based bed load transport equation and observed channel geometry shows that subtle...
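The dimensionless shear stress values quoted above are conventionally computed as a Shields stress from reach-averaged hydraulics. A minimal sketch using the standard depth-slope product for boundary shear stress; the input values in the example are illustrative, not measurements from the study:

```python
def dimensionless_shear_stress(depth_m, slope, d50_m,
                               rho=1000.0, rho_s=2650.0):
    """Reach-averaged Shields stress
        tau* = (rho * g * h * S) / ((rho_s - rho) * g * D50),
    with flow depth h, channel slope S, and median grain size D50.
    g cancels; rho and rho_s are water and quartz-sediment densities."""
    if depth_m <= 0 or d50_m <= 0:
        raise ValueError("depth and D50 must be positive")
    return (rho * depth_m * slope) / ((rho_s - rho) * d50_m)

# Illustrative reach: 1 m deep, slope 0.008, D50 of 60 mm
tau_star = dimensionless_shear_stress(1.0, 0.008, 0.060)
```

Values near the ~0.045-0.05 range typical of threshold (armored) gravel beds correspond to the BW result, while persistently higher values like the NBL's 0.073 suggest a bed that is more mobile relative to its grain size.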

  14. 26 CFR 1.860-2 - Requirements for deficiency dividends.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-2 Requirements for deficiency dividends. (a) In general—(1) Determination, etc. A qualified investment entity is allowed a... company taxable income,” “real estate investment trust taxable income,” and “capital gains dividends” in...

  15. 26 CFR 1.860-2 - Requirements for deficiency dividends.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-2 Requirements for deficiency dividends. (a) In general—(1) Determination, etc. A qualified investment entity is allowed a... company taxable income,” “real estate investment trust taxable income,” and “capital gains dividends” in...

  16. 26 CFR 1.860-2 - Requirements for deficiency dividends.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-2 Requirements for deficiency dividends. (a) In general—(1) Determination, etc. A qualified investment entity is allowed a... company taxable income,” “real estate investment trust taxable income,” and “capital gains dividends” in...

  17. 26 CFR 1.860-2 - Requirements for deficiency dividends.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.860-2 Requirements for deficiency dividends. (a) In general—(1) Determination, etc. A qualified investment entity is allowed a... company taxable income,” “real estate investment trust taxable income,” and “capital gains dividends” in...

  18. 26 CFR 1.860-2 - Requirements for deficiency dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAX (CONTINUED) INCOME TAXES Real Estate Investment Trusts § 1.860-2 Requirements for deficiency dividends. (a) In general—(1) Determination, etc. A qualified investment entity is allowed a deduction for a... income,” “real estate investment trust taxable income,” and “capital gains dividends” in sections 852(b...

  19. 12 CFR 327.53 - Allocation and payment of dividends.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... institution's 1996 assessment base share and the institution's eligible premium share. (2) As set forth in the... based upon an institution's eligible premium share shall increase steadily over the same fifteen-year... upon the reserve ratio at the end of 2006 and shall end with respect to any dividend based upon the...

  20. 26 CFR 1.858-1 - Dividends paid by a real estate investment trust after close of taxable year.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 9 2014-04-01 2014-04-01 false Dividends paid by a real estate investment trust..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.858-1 Dividends paid by a real estate investment trust after close of taxable year. (a...

  1. 26 CFR 1.858-1 - Dividends paid by a real estate investment trust after close of taxable year.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Dividends paid by a real estate investment trust..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Real Estate Investment Trusts § 1.858-1 Dividends paid by a real estate investment trust after close of taxable year. (a) General rule...

  2. 26 CFR 1.858-1 - Dividends paid by a real estate investment trust after close of taxable year.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 9 2012-04-01 2012-04-01 false Dividends paid by a real estate investment trust..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.858-1 Dividends paid by a real estate investment trust after close of taxable year. (a...

  3. 26 CFR 1.858-1 - Dividends paid by a real estate investment trust after close of taxable year.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 9 2013-04-01 2013-04-01 false Dividends paid by a real estate investment trust..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.858-1 Dividends paid by a real estate investment trust after close of taxable year. (a...

  4. 26 CFR 1.858-1 - Dividends paid by a real estate investment trust after close of taxable year.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 9 2011-04-01 2011-04-01 false Dividends paid by a real estate investment trust..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Real Estate Investment Trusts § 1.858-1 Dividends paid by a real estate investment trust after close of taxable year. (a...

  5. 26 CFR 1.404(k)-1T - Questions and answers relating to the deductibility of certain dividend distributions. (Temporary)

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... deductibility of certain dividend distributions. (Temporary) 1.404(k)-1T Section 1.404(k)-1T Internal Revenue... (CONTINUED) Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.404(k)-1T Questions and answers relating to the deductibility of certain dividend distributions. (Temporary) Q-1: What does section 404(k) provide...

  6. 26 CFR 1.404(k)-1T - Questions and answers relating to the deductibility of certain dividend distributions. (Temporary)

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... deductibility of certain dividend distributions. (Temporary) 1.404(k)-1T Section 1.404(k)-1T Internal Revenue... (CONTINUED) Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.404(k)-1T Questions and answers relating to the deductibility of certain dividend distributions. (Temporary) Q-1: What does section 404(k) provide...

  7. 26 CFR 1.404(k)-1T - Questions and answers relating to the deductibility of certain dividend distributions. (Temporary)

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... deductibility of certain dividend distributions. (Temporary) 1.404(k)-1T Section 1.404(k)-1T Internal Revenue... (CONTINUED) Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.404(k)-1T Questions and answers relating to the deductibility of certain dividend distributions. (Temporary) Q-1: What does section 404(k) provide...

  8. 26 CFR 1.404(k)-1T - Questions and answers relating to the deductibility of certain dividend distributions. (Temporary)

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... deductibility of certain dividend distributions. (Temporary) 1.404(k)-1T Section 1.404(k)-1T Internal Revenue... (CONTINUED) Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.404(k)-1T Questions and answers relating to the deductibility of certain dividend distributions. (Temporary) Q-1: What does section 404(k) provide...

  9. 26 CFR 1.1362-8 - Dividends received from affiliated subsidiaries.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 1504(a)(2), the term passive investment income does not include dividends from the C corporation to the... the earnings and profits are derived from activities that would not produce passive investment income... active or passive earnings and profits—(1) In general. An S corporation may use any reasonable method to...

  10. 26 CFR 1.404(k)-1T - Questions and answers relating to the deductibility of certain dividend distributions. (Temporary)

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... deductibility of certain dividend distributions. (Temporary) 1.404(k)-1T Section 1.404(k)-1T Internal Revenue... Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.404(k)-1T Questions and answers relating to the deductibility of certain dividend distributions. (Temporary) Q-1: What does section 404(k) provide? A-1: Section...

  11. 26 CFR 509.117 - Dividends and interest paid by a foreign corporation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (CONTINUED) REGULATIONS UNDER TAX CONVENTIONS SWITZERLAND General Income Tax § 509.117 Dividends and interest... Switzerland, or by a Swiss corporation, shall not be included in gross income and shall be exempt from United...

  12. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not trivial. For instance, the recorded seismic amplitudes used to estimate yield are significantly modified by the intervening media, which vary widely and must be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas assume the explosion has a standard depth of burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will make this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials, and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties in the yield estimates.
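A common starting point for such estimates is a magnitude-yield relation of the form mb = a + b·log10(Y), inverted for yield once the magnitude has been corrected for propagation. A minimal sketch of that inversion; the default coefficients are placeholders (real values depend on test-site geology, depth of burial, and calibration, none of which are given in the abstract):

```python
def yield_from_mb(mb, a=4.45, b=0.75):
    """Invert an assumed magnitude-yield relation
        mb = a + b * log10(Y),
    returning yield Y in kilotons. Coefficients a and b are
    illustrative placeholders, not calibrated values."""
    return 10.0 ** ((mb - a) / b)
```

With these placeholder coefficients, mb = 4.45 maps to 1 kt and each 0.75 magnitude units adds a factor of ten in yield; overburied or underburied tests shift the effective coefficients, which is precisely the complication the abstract highlights.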

  13. Deriving the Dividend Discount Model in the Intermediate Microeconomics Class

    ERIC Educational Resources Information Center

    Norman, Stephen; Schlaudraff, Jonathan; White, Karianne; Wills, Douglas

    2013-01-01

    In this article, the authors show that the dividend discount model can be derived using the basic intertemporal consumption model that is introduced in a typical intermediate microeconomics course. This result will be of use to instructors who teach microeconomics to finance students in that it demonstrates the value of utility maximization in…
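For dividends growing at a constant rate g and a discount rate r > g, the dividend discount model the authors derive reduces to the familiar Gordon growth closed form. A minimal sketch; the function name and example numbers are mine:

```python
def gordon_price(d1, r, g):
    """Gordon growth form of the dividend discount model:
        P0 = D1 / (r - g),
    the closed form of the geometric series
        sum_{t>=1} D1 * (1 + g)**(t - 1) / (1 + r)**t,
    where D1 is next period's dividend. Requires r > g for convergence."""
    if r <= g:
        raise ValueError("requires r > g for the series to converge")
    return d1 / (r - g)

# Example: $2 dividend next year, 8% discount rate, 3% growth -> price $40
price = gordon_price(d1=2.0, r=0.08, g=0.03)
```

The intertemporal-consumption derivation in the article supplies the discount rate r from the consumer's marginal rate of substitution, rather than assuming it exogenously.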

  14. Quality's Higher Education Dividends: Broadened Custodianship and Global Public Scholarship

    ERIC Educational Resources Information Center

    Jacobs, Gerrie J.

    2010-01-01

    This paper speculates on the possible contribution of the quality movement to higher education and the perceived dividends received from this, in general, over the past two decades but also, more specifically, with reference to the author's institution in South Africa. The first major quality contribution is a gradual broadening of higher…

  15. Will big data yield new mathematics? An evolving synergy with neuroscience.

    PubMed

    Feng, S; Holmes, P

    2016-06-01

    New mathematics has often been inspired by new insights into the natural world. Here we describe some ongoing and possible future interactions among the massive data sets being collected in neuroscience, methods for their analysis and mathematical models of the underlying, still largely uncharted neural substrates that generate these data. We start by recalling events that occurred in turbulence modelling when substantial space-time velocity field measurements and numerical simulations allowed a new perspective on the governing equations of fluid mechanics. While no analogous global mathematical model of neural processes exists, we argue that big data may enable validation or at least rejection of models at cellular to brain area scales and may illuminate connections among models. We give examples of such models and survey some relatively new experimental technologies, including optogenetics and functional imaging, that can report neural activity in live animals performing complex tasks. The search for analytical techniques for these data is already yielding new mathematics, and we believe their multi-scale nature may help relate well-established models, such as the Hodgkin-Huxley equations for single neurons, to more abstract models of neural circuits, brain areas and larger networks within the brain. In brief, we envisage a closer liaison, if not a marriage, between neuroscience and mathematics.

  16. Will big data yield new mathematics? An evolving synergy with neuroscience

    PubMed Central

    Feng, S.; Holmes, P.

    2016-01-01

    New mathematics has often been inspired by new insights into the natural world. Here we describe some ongoing and possible future interactions among the massive data sets being collected in neuroscience, methods for their analysis and mathematical models of the underlying, still largely uncharted neural substrates that generate these data. We start by recalling events that occurred in turbulence modelling when substantial space-time velocity field measurements and numerical simulations allowed a new perspective on the governing equations of fluid mechanics. While no analogous global mathematical model of neural processes exists, we argue that big data may enable validation or at least rejection of models at cellular to brain area scales and may illuminate connections among models. We give examples of such models and survey some relatively new experimental technologies, including optogenetics and functional imaging, that can report neural activity in live animals performing complex tasks. The search for analytical techniques for these data is already yielding new mathematics, and we believe their multi-scale nature may help relate well-established models, such as the Hodgkin–Huxley equations for single neurons, to more abstract models of neural circuits, brain areas and larger networks within the brain. In brief, we envisage a closer liaison, if not a marriage, between neuroscience and mathematics. PMID:27516705

  17. 26 CFR 1.245-1 - Dividends received from certain foreign corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... corporations. 1.245-1 Section 1.245-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Special Deductions for Corporations § 1.245-1 Dividends received from certain foreign corporations. (a) General rule. (1) A corporation is allowed a deduction...

  18. 26 CFR 1.265-3 - Nondeductibility of interest relating to exempt-interest dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... paid or accrued on the indebtedness is multiplied by a fraction. The numerator of the fraction is the amount of exempt-interest dividends received by the shareholder. The denominator of the fraction is the...

  19. 26 CFR 1.246-1 - Deductions not allowed for dividends from certain corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the distribution is made or for its next preceding taxable year; or (d) A real estate investment trust... not allowable with respect to any dividend received from: (a) A corporation organized under the China...

  20. 26 CFR 1.246-1 - Deductions not allowed for dividends from certain corporations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the distribution is made or for its next preceding taxable year; or (d) A real estate investment trust... not allowable with respect to any dividend received from: (a) A corporation organized under the China...

  1. 26 CFR 1.246-1 - Deductions not allowed for dividends from certain corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the distribution is made or for its next preceding taxable year; or (d) A real estate investment trust... not allowable with respect to any dividend received from: (a) A corporation organized under the China...

  2. 26 CFR 1.246-1 - Deductions not allowed for dividends from certain corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the distribution is made or for its next preceding taxable year; or (d) A real estate investment trust... not allowable with respect to any dividend received from: (a) A corporation organized under the China...

  3. 26 CFR 1.246-1 - Deductions not allowed for dividends from certain corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... made or for its next preceding taxable year; or (d) A real estate investment trust which, for its... with respect to any dividend received from: (a) A corporation organized under the China Trade Act, 1922...

  4. Civil society development versus the peace dividend: international aid in the Wanni.

    PubMed

    Culbert, Vance

    2005-03-01

Donors that provide aid to the Wanni region of Sri Lanka, which is controlled by the Liberation Tigers of Tamil Eelam (LTTE), are promoting initiatives that seek to advance the national peace process. Under the rubric of post-conflict reconstruction, the actions of political forces and structural factors have led to the prioritisation of two different approaches to peace-building: community capacity-building projects; and support for the 'peace dividend'. Both of these approaches face challenges. Cooperation with civil society actors is extremely difficult due to intimidation by the LTTE political authority and the authoritarian nature of its control. Peace-building successes with respect to the peace dividend are difficult to measure, and must be balanced against the negative effects of misdirected funds. Aid organisations must be careful not to consider the tasks of peace-building, humanitarian relief, and community empowerment as either interchangeable or mutually reinforcing endeavours.

  5. 77 FR 13968 - Dividend Equivalents From Sources Within the United States; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ...--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as follows... temporary regulations (TD 9572), relating to dividend equivalents from sources within the United States.... List of Subjects in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of...

  6. 13 CFR 107.1400 - Dividends or partnership distributions on 4 percent Preferred Securities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Dividends or partnership distributions on 4 percent Preferred Securities. 107.1400 Section 107.1400 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees...

  7. 26 CFR 1.6042-2 - Returns of information as to dividends paid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Internal Revenue Service office where such company's return is to be filed for the taxable year, a... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Returns of information as to dividends paid. 1.6042-2 Section 1.6042-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY...

  8. 26 CFR 1.6042-3T - Dividends subject to reporting (temporary).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 13 2014-04-01 2014-04-01 false Dividends subject to reporting (temporary). 1... guidance, see § 1.6042-3(b)(1)(v). (vi) If a foreign intermediary, as described in § 1.1441-1(c)(13), or a.... The applicability of this section expires on February 28, 2017. [T.D. 9658, 79 FR 12794, Mar. 6, 2014] ...

  9. 26 CFR 1.247-1 - Deduction for dividends paid on preferred stock of public utilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... consumers as are the rates within the regulated territory. (c) Preferred stock. (1) For the purposes of... distributions, and payable in preference to the payment of dividends on other stock, and (iii) the rate of...

  10. 26 CFR 1.381(c)(17)-1 - Deficiency dividend of personal holding company.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Deficiency dividend of personal holding company. 1.381(c)(17)-1 Section 1.381(c)(17)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Insolvency Reorganizations § 1.381(c)(17)-1...

  11. Epidemiology in the Era of Big Data

    PubMed Central

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  12. 26 CFR 1.1297-3 - Deemed sale or deemed dividend election by a U.S. person that is a shareholder of a section 1297...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... dividend election may be made by a shareholder whose pro rata share of the post-1986 earnings and profits... section 1297(e) PFIC shall include in income as a dividend its pro rata share of the post-1986 earnings... corporation thereafter qualifies as a PFIC under section 1297(a). (3) Post-1986 earnings and profits defined...

  13. 26 CFR 1.1297-3 - Deemed sale or deemed dividend election by a U.S. person that is a shareholder of a section 1297...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... dividend election may be made by a shareholder whose pro rata share of the post-1986 earnings and profits... section 1297(e) PFIC shall include in income as a dividend its pro rata share of the post-1986 earnings... corporation thereafter qualifies as a PFIC under section 1297(a). (3) Post-1986 earnings and profits defined...

  14. 26 CFR 1.854-1 - Limitations applicable to dividends received from regulated investment company.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... from regulated investment company. 1.854-1 Section 1.854-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.854-1 Limitations applicable to dividends received from...

  15. 26 CFR 1.854-1 - Limitations applicable to dividends received from regulated investment company.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... from regulated investment company. 1.854-1 Section 1.854-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.854-1 Limitations applicable to dividends received from...

  16. 26 CFR 1.854-1 - Limitations applicable to dividends received from regulated investment company.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... from regulated investment company. 1.854-1 Section 1.854-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulated Investment Companies and Real Estate Investment Trusts § 1.854-1 Limitations applicable to dividends received from regulated...

  17. 26 CFR 1.854-1 - Limitations applicable to dividends received from regulated investment company.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... from regulated investment company. 1.854-1 Section 1.854-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.854-1 Limitations applicable to dividends received from...

  18. 26 CFR 1.854-1 - Limitations applicable to dividends received from regulated investment company.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... from regulated investment company. 1.854-1 Section 1.854-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Regulated Investment Companies and Real Estate Investment Trusts § 1.854-1 Limitations applicable to dividends received from...

  19. 26 CFR 1.381(c)(14)-1 - Dividend carryover to personal holding company.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... income for the second preceding taxable year is $12,000, the sum of $2,000 (separate excess from N... $12,000 Dividends paid deduction of N Corporation for first preceding taxable year $50,000 Taxable... section 561 for taxable years ending after the date of distribution or transfer for which the acquiring...

  20. Are stock prices too volatile to be justified by the dividend discount model?

    NASA Astrophysics Data System (ADS)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold; hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are affected neither by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.
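The variance-bound logic can be sketched numerically. The snippet below is a minimal illustration, not the paper's general equilibrium setup: it assumes a constant discount rate r and a simulated dividend path, computes the ex-post "rational" price p*_t as the discounted sum of realized future dividends, and exhibits Var(p*), the right-hand side of the Shiller bound Var(p) <= Var(p*).

```python
# Illustrative Shiller-type variance-bound check under a constant discount
# rate r (a simplifying assumption; the paper works in a general equilibrium
# model).  The dividend path below is simulated, not data.
import random

random.seed(0)
r, T, H = 0.05, 1200, 200      # discount rate, sample length, truncation horizon
d = [1.0]
for _ in range(T - 1):
    d.append(max(0.01, 0.9 * d[-1] + 0.1 + random.gauss(0.0, 0.05)))

# Ex-post rational price p*_t: discounted realized dividends over the next H periods.
pstar = [sum(d[t + k] / (1 + r) ** k for k in range(1, H)) for t in range(T - H)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Market efficiency plus the dividend discount model imply Var(p) <= Var(p*)
# for an observed price series p; here we only compute the bound's right side.
print(round(var(pstar), 4))
```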

  1. On a perturbed Sparre Andersen risk model with multi-layer dividend strategy

    NASA Astrophysics Data System (ADS)

    Yang, Hu; Zhang, Zhimin

    2009-10-01

    In this paper, we consider a perturbed Sparre Andersen risk model, in which the inter-claim times are generalized Erlang(n) distributed. Under the multi-layer dividend strategy, piece-wise integro-differential equations for the discounted penalty functions are derived, and a recursive approach is applied to express the solutions. A numerical example to calculate the ruin probabilities is given to illustrate the solution procedure.
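A multi-layer dividend strategy can be made concrete with a small Monte Carlo sketch. The model below is deliberately simpler than the paper's (compound Poisson with Exp(1) claims instead of generalized Erlang(n) inter-claim times with a diffusion perturbation), and the two-layer parameters are hypothetical; it estimates a finite-horizon ruin probability.

```python
# Monte Carlo sketch of a ruin probability under a two-layer dividend strategy.
# Simplified relative to the paper: compound Poisson surplus, Exp(1) claims,
# no diffusion; all parameters are illustrative.
import random

random.seed(1)
lam, c = 1.0, 1.5             # claim intensity, base premium rate
b = 8.0                       # layer boundary: above b, dividends absorb rate 0.5
horizon, n_paths = 100.0, 2000

def ruined(u: float) -> bool:
    t, x = 0.0, u
    while t < horizon:
        w = random.expovariate(lam)          # next inter-claim time
        # Approximation: use the layer at the interval's start for the whole
        # interval (an exact scheme would split intervals at layer crossings).
        rate = c - 0.5 if x > b else c
        x += rate * w
        t += w
        x -= random.expovariate(1.0)         # Exp(1) claim size
        if x < 0:
            return True
    return False

psi = sum(ruined(5.0) for _ in range(n_paths)) / n_paths
print(psi)
```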

  2. 26 CFR 1.381(c)(25)-1 - Deficiency dividend of a qualified investment entity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Deficiency dividend of a qualified investment entity. 1.381(c)(25)-1 Section 1.381(c)(25)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Insolvency Reorganizations § 1.381(c)(25...

  3. Commentary: Epidemiology in the era of big data.

    PubMed

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  4. Big data in psychology: A framework for research advancement.

    PubMed

    Adjerid, Idris; Kelley, Ken

    2018-02-22

The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals and in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. 26 CFR 1.959-4 - Distributions to United States persons not counting as dividends.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... normal taxes and surtaxes) of subtitle A (relating to income taxes) of the Code as a distribution which... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Distributions to United States persons not... Distributions to United States persons not counting as dividends. Except as provided in section 960(a)(3) and...

  6. Big Opportunities and Big Concerns of Big Data in Education

    ERIC Educational Resources Information Center

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  7. 12 CFR 707.4 - Account disclosures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rate and annual percentage yield may change; (B) How the dividend rate is determined; (C) The frequency... dividend declaration date might be inaccurate because of known or contemplated dividend rate changes, the... rate changes, the credit union may disclose the prospective dividend rate and prospective annual...

  8. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the member to open, maintain, increase or renew an account. Dividends, interest or other earnings are... may or may not occur in the future. These formulas apply to both dividend-bearing and interest-bearing... not have a stated maturity), the APY can be calculated by use of the following simple formula: APY=100...
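The snippet's truncated formula is the general annual-percentage-yield equation of Appendix A (NCUA's Part 707 mirrors Regulation DD): APY = 100 × [(1 + Dividends/Principal)^(365/days in term) − 1]. A quick sketch:

```python
# General APY formula from 12 CFR Part 707, Appendix A (same as Regulation DD):
#   APY = 100 * [(1 + Dividends/Principal) ** (365 / days_in_term) - 1]
def apy(principal: float, dividends: float, days_in_term: int) -> float:
    return 100 * ((1 + dividends / principal) ** (365 / days_in_term) - 1)

# $1,000 held for a 182-day term earning $30.37 in dividends:
print(round(apy(1000, 30.37, 182), 2))   # → 6.18
```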

  9. Do yield and quality of big bluestem and switchgrass feedstock decline over winter?

    USDA-ARS?s Scientific Manuscript database

Switchgrass (Panicum virgatum L.) and big bluestem (Andropogon gerardii Vitman) are potential bioenergy feedstocks for thermochemical platforms. Feedstock storage, fall harvest constraints, and environmental benefits provided by perennials are rationales for developing localized perennial feedstock...

  10. 26 CFR 1.6042-2T - Returns of information as to dividends paid (temporary).

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 13 2014-04-01 2014-04-01 false Returns of information as to dividends paid... in § 1.6049-4(f)(10) or (14), respectively), or reporting Model 1 FFI (as defined in § 1.6049-4(f)(13..., 2014. (g) Expiration date. The applicability of this section expires on February 28, 2017. [T.D. 9658...

  11. 26 CFR 514.22 - Dividends received by persons not entitled to reduced rate of tax.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TREASURY (CONTINUED) REGULATIONS UNDER TAX CONVENTIONS FRANCE Withholding of Tax Taxable Years Beginning... representative, a dividend from sources within France from which French tax has been withheld at the reduced rate... included in the gross income from sources within France of any beneficiary or partner, as the case may be...

  12. 17 CFR 270.19a-1 - Written statement to accompany dividend payments by management companies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... that an open-end company may treat as a separate source its net profits from such sales during its... specify the sources from which the remainder was paid. Every company which in any fiscal year elects to... dividend payments by management companies. 270.19a-1 Section 270.19a-1 Commodity and Securities Exchanges...

  13. 26 CFR 514.22 - Dividends received by persons not entitled to reduced rate of tax.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Contracting State in the collection of taxes covered by the convention. (b) Additional French tax to be... dividend from which French tax has been withheld at the reduced rate of 15 percent, who is a nominee or..., shall withhold an additional amount of French tax equivalent to the French tax which would have been...

  14. 26 CFR 514.22 - Dividends received by persons not entitled to reduced rate of tax.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Contracting State in the collection of taxes covered by the convention. (b) Additional French tax to be... dividend from which French tax has been withheld at the reduced rate of 15 percent, who is a nominee or..., shall withhold an additional amount of French tax equivalent to the French tax which would have been...

  15. 26 CFR 514.22 - Dividends received by persons not entitled to reduced rate of tax.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Contracting State in the collection of taxes covered by the convention. (b) Additional French tax to be... dividend from which French tax has been withheld at the reduced rate of 15 percent, who is a nominee or..., shall withhold an additional amount of French tax equivalent to the French tax which would have been...

  16. 26 CFR 514.22 - Dividends received by persons not entitled to reduced rate of tax.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Contracting State in the collection of taxes covered by the convention. (b) Additional French tax to be... dividend from which French tax has been withheld at the reduced rate of 15 percent, who is a nominee or..., shall withhold an additional amount of French tax equivalent to the French tax which would have been...

  17. Frontiers of Big Bang cosmology and primordial nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Mathews, Grant J.; Cheoun, Myung-Ki; Kajino, Toshitaka; Kusakabe, Motohiko; Yamazaki, Dai G.

    2012-11-01

We summarize some current research on the formation and evolution of the universe and overview some of the key questions surrounding the big bang. There are really only two observational cosmological probes of the physics of the early universe. Of those two, the only probe during the relevant radiation-dominated epoch is the yield of light elements during the epoch of big bang nucleosynthesis. The synthesis of light elements occurs in the temperature regime from 10^8 to 10^10 K and times of about 1 to 10^4 seconds into the big bang. The other probe is the spectrum of temperature fluctuations in the CMB, which (among other things) contains information on the first quantum fluctuations in the universe, along with details of the distribution and evolution of dark matter, baryonic matter, and photons up to the surface of photon last scattering. Here, we emphasize the role of these probes in answering some key questions of the big bang and early universe cosmology.
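The quoted temperature and time windows are linked by the standard radiation-era order-of-magnitude relation t[s] ≈ (T/1 MeV)^-2, with 1 MeV ≈ 1.16×10^10 K. This is textbook cosmology, not a result of the paper; the sketch below just checks that 10^10 K and 10^8 K bracket roughly 1 s to 10^4 s:

```python
# Rough radiation-dominated-era clock: t[s] ≈ (T / 1 MeV)^-2, 1 MeV ≈ 1.16e10 K.
# Standard order-of-magnitude relation (neglects changes in relativistic
# degrees of freedom), used here only to connect the quoted T and t ranges.
def time_since_big_bang_s(T_kelvin: float) -> float:
    T_mev = T_kelvin / 1.16e10
    return T_mev ** -2

for T in (1e10, 1e9, 1e8):
    print(f"T = {T:.0e} K  ->  t ~ {time_since_big_bang_s(T):.3g} s")
```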

  18. On the expected discounted penalty functions for two classes of risk processes under a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Lu, Zhaoyang; Xu, Wei; Sun, Decai; Han, Weiguo

    2009-10-01

In this paper, the discounted penalty (Gerber-Shiu) functions for a risk model involving two independent classes of insurance risks under a threshold dividend strategy are developed. We also assume that the two claim number processes are independent Poisson and generalized Erlang(2) processes, respectively. When the surplus is above the threshold level, dividends are paid at a constant rate that does not exceed the premium rate. Two systems of integro-differential equations for the discounted penalty functions are derived, based on whether the surplus is above the threshold level. Laplace transforms of the discounted penalty functions when the surplus is below the threshold level are obtained. We also derive a system of renewal equations satisfied by the discounted penalty function with initial surplus above the threshold level via the Dickson-Hipp operator. Finally, analytical solutions of the two systems of integro-differential equations are presented.
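The threshold strategy itself is easy to picture by simulation. The sketch below is far simpler than the paper's analytical treatment (one compound Poisson class with Exp(1) claims rather than two classes, and all parameters hypothetical): while the surplus exceeds the threshold b, dividends flow at rate alpha < c, and we estimate their expected discounted total up to ruin or a finite horizon.

```python
# Monte Carlo sketch: expected discounted dividends paid before ruin under a
# threshold strategy.  Simplified to one compound Poisson class with Exp(1)
# claims; the paper treats two independent claim classes analytically.
import math
import random

random.seed(2)
lam, c, alpha = 1.0, 1.5, 0.3   # claim rate, premium rate, dividend rate <= c
b, delta = 6.0, 0.05            # threshold level, discount force
horizon, n_paths = 200.0, 2000

def discounted_dividends(u: float) -> float:
    t, x, total = 0.0, u, 0.0
    while t < horizon:
        w = random.expovariate(lam)
        # Approximation: apply the regime at the interval's starting surplus.
        if x > b:
            # dividends accrue continuously at rate alpha, discounted at delta
            total += alpha * (math.exp(-delta * t) - math.exp(-delta * (t + w))) / delta
            x += (c - alpha) * w
        else:
            x += c * w
        t += w
        x -= random.expovariate(1.0)
        if x < 0:
            return total        # ruin: the dividend stream stops
    return total

v = sum(discounted_dividends(4.0) for _ in range(n_paths)) / n_paths
print(round(v, 3))
```

Note the hard upper bound alpha/delta = 6 on any path's discounted dividends, a useful sanity check on the estimate.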

  19. Digital Dividend Aware Business Models for the Creative Industries: Challenges and Opportunities in EU Markets

    NASA Astrophysics Data System (ADS)

    Cossiavelou, Vassiliki

EU countries have a historically unique opportunity to enable their creative industries to promote knowledge societies by applying new, digital dividend (DD) aware business models to their media content and network markets. This new extra-media gatekeeping factor could shape new alliances and cooperation among the member states and the global media markets as well.

  20. 30 years of progress toward increased biomass yield of switchgrass and big bluestem

    USDA-ARS?s Scientific Manuscript database

Breeding to improve biomass production of switchgrass and big bluestem for conversion to bioenergy began in 1992. The purpose of this study was (1) to develop a platform for uniform regional testing of cultivars and experimental populations for these species and (2) to estimate the gains made by br...

  1. A Classroom Game on a Negative Externality Correcting Tax: Revenue Return, Regressivity, and the Double Dividend

    ERIC Educational Resources Information Center

    Duke, Joshua M.; Sassoon, David M.

    2017-01-01

    The concept of negative externality is central to the teaching of environmental economics, but corrective taxes are almost always regressive. How exactly might governments return externality-correcting tax revenue to overcome regressivity and not alter marginal incentives? In addition, there is a desire to achieve a double dividend in the use of…

  2. 26 CFR 1.522-3 - Patronage dividends, rebates, or refunds; treatment as to cooperative associations entitled to...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...; treatment as to cooperative associations entitled to tax treatment under section 522. 1.522-3 Section 1.522-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Farmers' Cooperatives § 1.522-3 Patronage dividends, rebates, or refunds...

  3. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    ERIC Educational Resources Information Center

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  4. Meta-analysis of Big Five personality traits in autism spectrum disorder.

    PubMed

    Lodi-Smith, Jennifer; Rodgers, Jonathan D; Cunningham, Sara A; Lopata, Christopher; Thomeer, Marcus L

    2018-04-01

    The present meta-analysis synthesizes the emerging literature on the relationship of Big Five personality traits to autism spectrum disorder. Studies were included if they (1) either (a) measured autism spectrum disorder characteristics using a metric that yielded a single score quantification of the magnitude of autism spectrum disorder characteristics and/or (b) studied individuals with an autism spectrum disorder diagnosis compared to individuals without an autism spectrum disorder diagnosis and (2) measured Big Five traits in the same sample or samples. Fourteen reviewed studies include both correlational analyses and group comparisons. Eighteen effect sizes per Big Five trait were used to calculate two overall effect sizes per trait. Meta-analytic effects were calculated using random effects models. Twelve effects (per trait) from nine studies reporting correlations yielded a negative association between each Big Five personality trait and autism spectrum disorder characteristics (Fisher's z ranged from -.21 (conscientiousness) to -.50 (extraversion)). Six group contrasts (per trait) from six studies comparing individuals diagnosed with autism spectrum disorder to neurotypical individuals were also substantial (Hedges' g ranged from -.88 (conscientiousness) to -1.42 (extraversion)). The potential impact of personality on important life outcomes and new directions for future research on personality in autism spectrum disorder are discussed in light of results.
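The pipeline described above (correlations converted to Fisher's z, pooled with a random-effects model) can be sketched with a DerSimonian-Laird estimator. The correlations and sample sizes below are hypothetical, chosen only to resemble the reported range of trait-ASD associations:

```python
# Sketch of the meta-analytic pipeline: r -> Fisher's z, then a
# DerSimonian-Laird random-effects pooled estimate.  Inputs are made up.
import math

def fisher_z(r: float) -> float:
    return 0.5 * math.log((1 + r) / (1 - r))

def dl_random_effects(zs, ns):
    v = [1 / (n - 3) for n in ns]            # sampling variance of Fisher's z
    w = [1 / vi for vi in v]
    fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - fixed) ** 2 for wi, zi in zip(w, zs))
    df = len(zs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance estimate
    w_re = [1 / (vi + tau2) for vi in v]
    return sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)

rs = [-0.45, -0.55, -0.40, -0.60]            # hypothetical study correlations
ns = [80, 120, 60, 150]                      # hypothetical sample sizes
pooled_z = dl_random_effects([fisher_z(r) for r in rs], ns)
print(round(pooled_z, 2))
```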

  5. 17 CFR 229.201 - (Item 201) Market price of and dividends on the registrant's common equity and related...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... dividends on the registrant's common equity and related stockholder matters. 229.201 Section 229.201... the registrant's common equity and related stockholder matters. (a) Market information. (1)(i) Identify the principal United States market or markets in which each class of the registrant's common...

  6. 17 CFR 229.201 - (Item 201) Market price of and dividends on the registrant's common equity and related...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... dividends on the registrant's common equity and related stockholder matters. 229.201 Section 229.201... the registrant's common equity and related stockholder matters. (a) Market information. (1)(i) Identify the principal United States market or markets in which each class of the registrant's common...

  7. 26 CFR 1.611-4 - Depletion as a factor in computing earnings and profits for dividend purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 7 2011-04-01 2009-04-01 true Depletion as a factor in computing earnings and profits for dividend purposes. 1.611-4 Section 1.611-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1...

  8. 26 CFR 1.611-4 - Depletion as a factor in computing earnings and profits for dividend purposes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Depletion as a factor in computing earnings and profits for dividend purposes. 1.611-4 Section 1.611-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1...

  9. 75 FR 81320 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    .... For example, to calculate the daily total return today, the previous day's closing market price for the component would be subtracted from today's closing market price for the component to determine a... dividend if today were an ``ex-dividend'' date to yield the Price Plus Dividend Difference for the...
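The arithmetic in the filing's snippet is a one-liner: today's close minus the prior close, plus any dividend going "ex" today, gives the Price Plus Dividend Difference; dividing by the prior close yields the daily total return. A sketch (example numbers are hypothetical):

```python
# Daily total-return arithmetic as described in the snippet above.
def daily_total_return(prev_close: float, close: float, dividend: float = 0.0) -> float:
    # "Price Plus Dividend Difference": price change plus any ex-date dividend
    price_plus_dividend_difference = (close - prev_close) + dividend
    return price_plus_dividend_difference / prev_close

# A component closes at 101 after closing at 100, and goes ex a $0.50 dividend:
print(daily_total_return(100.0, 101.0, 0.50))   # → 0.015
```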

  10. Big Data, Big Problems: A Healthcare Perspective.

    PubMed

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up as the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  11. 12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... your deposit account is ___% with an annual percentage yield (APY) of ___%. [For purposes of this...-bearing Term Share Accounts The dividend rate on your term share account is ___% with an annual percentage... declaration date/ (date)], the dividend rate was ___% with an annual percentage yield (APY) of ___% on your...

  12. 12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... your deposit account is ___% with an annual percentage yield (APY) of ___%. [For purposes of this...-bearing Term Share Accounts The dividend rate on your term share account is ___% with an annual percentage... declaration date/ (date)], the dividend rate was ___% with an annual percentage yield (APY) of ___% on your...

  13. 12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... your deposit account is ___% with an annual percentage yield (APY) of ___%. [For purposes of this...-bearing Term Share Accounts The dividend rate on your term share account is ___% with an annual percentage... declaration date/ (date)], the dividend rate was ___% with an annual percentage yield (APY) of ___% on your...

  14. 12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... your deposit account is ___% with an annual percentage yield (APY) of ___%. [For purposes of this...-bearing Term Share Accounts The dividend rate on your term share account is ___% with an annual percentage... declaration date/ (date)], the dividend rate was ___% with an annual percentage yield (APY) of ___% on your...

  15. 12 CFR 707.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... new or existing account. (c) Annual percentage yield means a percentage rate reflecting the total amount of dividends paid on an account, based on the dividend rate and the frequency of compounding for a... which the simple dividend rate may change after the account is opened, unless the credit union contracts...

  16. Comprehension and Analysis of Information in Text. III. Sentence Construction, Evaluation and Use. Addendum.

    DTIC Science & Technology

    1980-07-01

companies. 6) Dividends --past and anticipated payments to stockholders. Next, we selected 211 sentences from various sources of financial data, such as...share can be expected in the near future, the payout ratio may decline. 232 Company dividend yield is normal for the industry. 233 Directors recently...Disappointing performance of the new series '0' printers put ECTEX in the red. 24 DIVIDENDS 333 Dividend payout has grown appreciably in the past 2

  17. Small scale sequence automation pays big dividends

    NASA Technical Reports Server (NTRS)

    Nelson, Bill

    1994-01-01

Galileo sequence design and integration are supported by a suite of formal software tools. Sequence review, however, is largely a manual process, with reviewers scanning hundreds of pages of cryptic computer printouts to verify sequence correctness. Beginning in 1990, a series of small, PC-based sequence review tools evolved. Each tool performs a specific task, but all have a common 'look and feel'. The narrow focus of each tool means simpler operation and easier creation, testing, and maintenance. Benefits from these tools are (1) decreased review time by factors of 5 to 20 or more, with a concomitant reduction in staffing, (2) increased review accuracy, and (3) excellent returns on time invested.

  18. BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertulani, C. A.; Fuqua, J.; Hussein, M. S.

The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from the unity value is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. We find that the observations are consistent with a non-extensive parameter q = 1 (+0.05, −0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
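In the Tsallis framework, the Boltzmann factor exp(−E/kT) is replaced by the q-exponential [1 − (1 − q)·E/kT]^(1/(1−q)), which recovers the Boltzmann-Gibbs form as q → 1; for q > 1 the distribution's tail is fatter, which is what shifts the thermonuclear reaction rates. A minimal sketch of that comparison (the sample q and energies are illustrative, not the paper's fit):

```python
# Boltzmann factor vs. Tsallis q-exponential for a few reduced energies x = E/kT.
import math

def boltzmann(x: float) -> float:
    return math.exp(-x)

def tsallis(x: float, q: float) -> float:
    # q-exponential: [1 - (1 - q) x]^(1/(1-q)); reduces to exp(-x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return math.exp(-x)
    base = 1 - (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

for x in (0.5, 2.0, 5.0):
    print(x, round(boltzmann(x), 4), round(tsallis(x, 1.05), 4))
```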

  19. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

The superbulky deca-aryleuropocene [Eu(Cp(BIG))2], Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl, was prepared by reaction of [Eu(dmat)2(thf)2], DMAT = 2-Me2N-α-Me3Si-benzyl, with two equivalents of Cp(BIG)H. Recrystallization from cold hexane gave the product with a surprisingly bright and efficient orange emission (45% quantum yield). The crystal structure is isomorphic to those of [M(Cp(BIG))2] (M = Sm, Yb, Ca, Ba) and shows the typical distortions that arise from Cp(BIG)⋅⋅⋅Cp(BIG) attraction, as well as an excessively large displacement parameter for the heavy Eu atom (U(eq) = 0.075). In order to gain information on the true oxidation state of the central metal in superbulky metallocenes [M(Cp(BIG))2] (M = Sm, Eu, Yb), several physical analyses have been applied. Temperature-dependent magnetic susceptibility data of [Yb(Cp(BIG))2] show diamagnetism, indicating stable divalent ytterbium. Temperature-dependent (151)Eu Mössbauer effect spectroscopy of [Eu(Cp(BIG))2] was carried out over the temperature range 93-215 K, and the hyperfine and dynamical properties of the Eu(II) species are discussed in detail. The mean square amplitude of vibration of the Eu atom as a function of temperature was determined and compared to the value extracted from the single-crystal X-ray data at 203 K. The large difference in these two values was ascribed to the presence of static disorder and/or the presence of low-frequency torsional and librational modes in [Eu(Cp(BIG))2]. X-ray absorbance near edge spectroscopy (XANES) showed that all three [Ln(Cp(BIG))2] (Ln = Sm, Eu, Yb) compounds are divalent. The XANES white-line spectra are at 8.3, 7.3, and 7.8 eV, for Sm, Eu, and Yb, respectively, lower than the Ln2O3 standards. No XANES temperature dependence was found from room temperature to 100 K. XANES also showed that the [Ln(Cp(BIG))2] complexes had less trivalent impurity than a [EuI2(thf)x] standard. The complex [Eu(Cp(BIG))2] shows already at room temperature

  20. A demographic dividend of the FP2020 Initiative and the SDG reproductive health target: Case studies of India and Nigeria

    PubMed Central

    Li, Qingfeng; Rimon, Jose G.

    2018-01-01

Background: The demographic dividend, defined as the economic growth potential resulting from favorable shifts in population age structure following rapid fertility decline, has been widely employed to advocate improving access to family planning. The current framework focuses on the long-term potential, while the short-term benefits may also help persuade policy makers to invest in family planning. Methods: We estimate the short- and medium-term economic benefits from two major family planning goals: the Family Planning 2020 (FP2020)’s goal of adding 120 million modern contraceptive users by 2020; Sustainable Development Goals (SDG) 3.7 of ensuring universal access to family planning by 2030. We apply the cohort component method to World Population Prospects and National Transfer Accounts data. India and Nigeria, respectively the most populous Asian and African country under the FP2020 initiative, are used as case studies. Results: Meeting the FP2020 target implies that on average, the number of children that need to be supported by every 100 working-age people would decrease by 8 persons in India and 11 persons in Nigeria in 2020; the associated reduction remains at 8 persons in India, but increases to 14 persons in Nigeria by 2030 under the SDG 3.7. In India meeting the FP2020 target would yield a saving of US$18.2 billion (PPP) in consumption expenditures for children and youth in the year 2020 alone, and that increased to US$89.7 billion by 2030. In Nigeria the consumption saved would be US$2.5 billion in 2020 and $12.9 billion by 2030. Conclusions: The tremendous economic benefits from meeting the FP2020 and SDG family planning targets demonstrate the cost-effectiveness of investment in promoting access to contraceptive methods. The gap already apparent between the observed and targeted trajectories indicates tremendous missed opportunities. Accelerated progress is needed to achieve the FP2020 and SDG goals and so reap the demographic dividend. PMID

  1. A demographic dividend of the FP2020 Initiative and the SDG reproductive health target: Case studies of India and Nigeria.

    PubMed

    Li, Qingfeng; Rimon, Jose G

    2018-02-22

    Background: The demographic dividend, defined as the economic growth potential resulting from favorable shifts in population age structure following rapid fertility decline, has been widely employed to advocate improving access to family planning. The current framework focuses on the long-term potential, while the short-term benefits may also help persuade policy makers to invest in family planning. Methods: We estimate the short- and medium-term economic benefits from two major family planning goals: the Family Planning 2020 (FP2020)'s goal of adding 120 million modern contraceptive users by 2020; Sustainable Development Goals (SDG) 3.7 of ensuring universal access to family planning by 2030. We apply the cohort component method to World Population Prospects and National Transfer Accounts data. India and Nigeria, respectively the most populous Asian and African countries under the FP2020 initiative, are used as case studies. Results: Meeting the FP2020 target implies that on average, the number of children that need to be supported by every 100 working-age people would decrease by 8 persons in India and 11 persons in Nigeria in 2020; the associated reduction remains at 8 persons in India, but increases to 14 persons in Nigeria by 2030 under SDG 3.7. In India, meeting the FP2020 target would yield a saving of US$18.2 billion (PPP) in consumption expenditures for children and youth in the year 2020 alone, rising to US$89.7 billion by 2030. In Nigeria, the consumption saved would be US$2.5 billion in 2020 and US$12.9 billion by 2030. Conclusions: The tremendous economic benefits from meeting the FP2020 and SDG family planning targets demonstrate the cost-effectiveness of investment in promoting access to contraceptive methods. The gap already apparent between the observed and targeted trajectories indicates tremendous missed opportunities. Accelerated progress is needed to achieve the FP2020 and SDG goals and so reap the demographic dividend.
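    The dependency-ratio and consumption-saving figures in this abstract rest on simple arithmetic that can be sketched directly. The sketch below is illustrative only: the inputs are hypothetical placeholders, not the study's World Population Prospects or National Transfer Accounts data, and the study's actual estimates come from the cohort component projection method, which this toy arithmetic does not attempt to reproduce.

```python
# Hedged sketch of the abstract's two headline quantities.
# All numbers here are hypothetical, for illustration only.

def child_dependency_ratio(children: float, working_age: float) -> float:
    """Children to be supported per 100 working-age people."""
    return 100.0 * children / working_age

def consumption_saved(averted_child_years: float, cost_per_child_year: float) -> float:
    """Consumption expenditure avoided when births are averted."""
    return averted_child_years * cost_per_child_year

# A drop from 50 to 42 children per 100 workers is the kind of
# 8-person reduction the abstract reports for India.
reduction = child_dependency_ratio(50e6, 100e6) - child_dependency_ratio(42e6, 100e6)

# Illustrative: 30 million averted child-years at US$600 (PPP) each
# would save US$18 billion in consumption.
saving = consumption_saved(30e6, 600.0)
```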

  2. How Big Is Too Big?

    ERIC Educational Resources Information Center

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  3. BigDog

    NASA Astrophysics Data System (ADS)

    Playter, R.; Buehler, M.; Raibert, M.

    2006-05-01

    BigDog's goal is to be the world's most advanced quadruped robot for outdoor applications. BigDog is aimed at the mission of a mechanical mule - a category with few competitors to date: power autonomous quadrupeds capable of carrying significant payloads, operating outdoors, with static and dynamic mobility, and fully integrated sensing. BigDog is about 1 m tall, 1 m long and 0.3 m wide, and weighs about 90 kg. BigDog has demonstrated walking and trotting gaits, as well as standing up and sitting down. Since its creation in the fall of 2004, BigDog has logged tens of hours of walking, climbing and running time. It has walked up and down 25 & 35 degree inclines and trotted at speeds up to 1.8 m/s. BigDog has walked at 0.7 m/s over loose rock beds and carried over 50 kg of payload. We are currently working to expand BigDog's rough terrain mobility through the creation of robust locomotion strategies and terrain sensing capabilities.

  4. VEBAs--ordinary and necessary expenses--deductions and constructive dividends. Neonatology Associates, P.A. v. Commissioner of Internal Revenue.

    PubMed

    2004-01-01

    Contributions made by professional medical corporations into voluntary employee benefit program plans (VEBAs), which were well in excess of the cost of the term life insurance provided to the participants, were not ordinary and necessary business expenses, and the distributions of surplus cash to owner physicians upon conversion to individual policies constituted constructive dividends taxable to the individual taxpayers.

  5. Nursing Needs Big Data and Big Data Needs Nursing.

    PubMed

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  6. Big five personality factors and suicide rates in the United States: a state-level analysis.

    PubMed

    Voracek, Martin

    2009-08-01

    Partly replicating findings from several cross-national studies (of Lester and of Voracek) on possible aggregate-level associations between personality and suicide prevalence, state-level analysis within the United States yielded significantly negative associations between the Big Five factor of Neuroticism and suicide rates. This effect was observed for historical as well as contemporary suicide rates of the total or the elderly population and was preserved with controls for the four other Big Five factors and measures of state wealth. Also conforming to cross-national findings, the Big Five factors of Agreeableness and Extraversion were negatively, albeit not reliably, associated with suicide rates.

  7. Safety belts : the uncollected dividends : a manual for use by state legislators and state officials on techniques to increase safety belt usage.

    DOT National Transportation Integrated Search

    1977-09-01

    Recognizing that increased safety belt usage is by far the most cost-effective highway safety measure that can be undertaken by any State, this project describes how States can initiate action to collect major dividends in cost and human welfare.

  8. 26 CFR 521.108 - Exemption from, or reduction in rate of, United States tax in the case of dividends, interest and...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... General Income Tax Taxation of Nonresident Aliens Who Are Residents of Denmark and of Danish Corporations... dividends received from sources within the United States by (i) a nonresident alien (including a nonresident alien individual, fiduciary and partnership) who is a resident of Denmark, or (ii) a Danish corporation...

  9. 26 CFR 521.108 - Exemption from, or reduction in rate of, United States tax in the case of dividends, interest and...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... General Income Tax Taxation of Nonresident Aliens Who Are Residents of Denmark and of Danish Corporations... dividends received from sources within the United States by (i) a nonresident alien (including a nonresident alien individual, fiduciary and partnership) who is a resident of Denmark, or (ii) a Danish corporation...

  10. 26 CFR 521.108 - Exemption from, or reduction in rate of, United States tax in the case of dividends, interest and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... General Income Tax Taxation of Nonresident Aliens Who Are Residents of Denmark and of Danish Corporations... dividends received from sources within the United States by (i) a nonresident alien (including a nonresident alien individual, fiduciary and partnership) who is a resident of Denmark, or (ii) a Danish corporation...

  11. 26 CFR 521.108 - Exemption from, or reduction in rate of, United States tax in the case of dividends, interest and...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... General Income Tax Taxation of Nonresident Aliens Who Are Residents of Denmark and of Danish Corporations... dividends received from sources within the United States by (i) a nonresident alien (including a nonresident alien individual, fiduciary and partnership) who is a resident of Denmark, or (ii) a Danish corporation...

  12. 26 CFR 521.108 - Exemption from, or reduction in rate of, United States tax in the case of dividends, interest and...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... General Income Tax Taxation of Nonresident Aliens Who Are Residents of Denmark and of Danish Corporations... dividends received from sources within the United States by (i) a nonresident alien (including a nonresident alien individual, fiduciary and partnership) who is a resident of Denmark, or (ii) a Danish corporation...

  13. BigBWA: approaching the Burrows-Wheeler aligner to Big Data technologies.

    PubMed

    Abuín, José M; Pichel, Juan C; Pena, Tomás F; Amigo, Jorge

    2015-12-15

    BigBWA is a new tool that uses the Big Data technology Hadoop to boost the performance of the Burrows-Wheeler aligner (BWA). Important reductions in the execution times were observed when using this tool. In addition, BigBWA is fault tolerant and it does not require any modification of the original BWA source code. BigBWA is available at the project GitHub repository: https://github.com/citiususc/BigBWA. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Accelerator boom hones China's engineering expertise

    NASA Astrophysics Data System (ADS)

    Normile, Dennis

    2018-02-01

    In raising the curtain on the China Spallation Neutron Source, China has joined just four other nations in having mastered the technology of accelerating and controlling beams of protons. The $277 million facility, set to open to users this spring in Dongguan, is expected to yield big dividends in materials science, chemistry, and biology. More world class machines are on the way, as China this year starts construction on four other major accelerator facilities. The building boom is prompting a scramble to find enough engineers and technicians to finish the projects. But if they all come off as planned, the facilities would position China to tackle the next global megaproject: a giant accelerator that would pick up where Europe's Large Hadron Collider leaves off.

  15. Benchmarking Big Data Systems and the BigData Top100 List.

    PubMed

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  16. Big data, big knowledge: big data for personalized healthcare.

    PubMed

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  17. 17 CFR 275.205-1 - Definition of “investment performance” of an investment company and “investment record” of an...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... & poor's 500 stock composite index for calendar 1971] Quarterly ending— Index value 1 Quarterly dividend yield-composite index Annual percent 2 Quarterly percent 3 (1/4 of annual) Dec. 1970 92.15 Mar. 1971... Investment record of Standard & Poor's 500 stock composite index assuming quarterly reinvestment dividends...
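    The "investment record ... assuming quarterly reinvestment [of] dividends" that this truncated table illustrates can be sketched as compounding each quarter's index price change together with that quarter's dividend yield. This is a plausible reading of the excerpt, not the rule's verbatim formula, and the values used are illustrative.

```python
# Sketch (assumption): total return of an index over a period when each
# quarter's dividends are reinvested at quarter-end.

def investment_record(index_levels, quarterly_div_yields):
    """index_levels: quarter-end index values (length n+1 for n quarters).
    quarterly_div_yields: dividend yield earned in each quarter (length n).
    Returns the cumulative fractional return with reinvestment."""
    value = 1.0
    for prev, cur, dy in zip(index_levels, index_levels[1:], quarterly_div_yields):
        value *= cur / prev + dy  # price relative plus reinvested dividend
    return value - 1.0

# A flat quarter with a 0.75% dividend yield contributes 0.75%.
flat_quarter = investment_record([92.15, 92.15], [0.0075])
```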

  18. Meteor Observations as Big Data Citizen Science

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.; Vinkovic, D.; Schwarz, G.; Nina, A.; Koschny, D.; Lyytinen, E.

    2016-12-01

    Meteor science represents an excellent example of a citizen science project, where progress in the field has been largely determined by amateur observations. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to broaden its experimental and theoretical horizons and seek more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increased interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently established BigSkyEarth http://bigskyearth.eu/ network.

  19. Big data uncertainties.

    PubMed

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
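    The second pitfall described here, weak but pervasive dependence, has a standard textbook illustration: for n equicorrelated observations, the variance of the sample mean stops shrinking once n is large. The sketch below illustrates the abstract's point with the classical formula and is not taken from the paper itself.

```python
# Variance of the mean of n observations, each with variance s2 and
# common pairwise correlation rho:
#   Var(mean) = (s2 / n) * (1 + (n - 1) * rho)
# With rho = 0 this decays like 1/n; with any rho > 0 it plateaus near s2*rho.

def var_of_mean(s2: float, n: int, rho: float) -> float:
    return (s2 / n) * (1 + (n - 1) * rho)

independent = var_of_mean(1.0, 1_000_000, 0.0)   # keeps shrinking with n
dependent = var_of_mean(1.0, 1_000_000, 0.01)    # stuck near 0.01 = s2 * rho
```

So a "big" sample with even 1% pervasive correlation carries roughly the information of 100 independent observations, which is the increased-variance effect the abstract describes.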

  20. 17 CFR 275.205-1 - Definition of “investment performance” of an investment company and “investment record” of an...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... & poor's 500 stock composite index for calendar 1971] Quarterly ending— Index value 1 Quarterly dividend yield-composite index Annual percent 2 Quarterly percent 3 (1/4 of annual) Dec. 1970 92.15 Mar. 1971 100... Investment record of Standard & Poor's 500 stock composite index assuming quarterly reinvestment dividends...

  1. 17 CFR 275.205-1 - Definition of “investment performance” of an investment company and “investment record” of an...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... & poor's 500 stock composite index for calendar 1971] Quarterly ending— Index value 1 Quarterly dividend yield-composite index Annual percent 2 Quarterly percent 3 (1/4 of annual) Dec. 1970 92.15 Mar. 1971 100... Investment record of Standard & Poor's 500 stock composite index assuming quarterly reinvestment dividends...

  2. 17 CFR 275.205-1 - Definition of “investment performance” of an investment company and “investment record” of an...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... & poor's 500 stock composite index for calendar 1971] Quarterly ending— Index value 1 Quarterly dividend yield-composite index Annual percent 2 Quarterly percent 3 (1/4 of annual) Dec. 1970 92.15 Mar. 1971 100... Investment record of Standard & Poor's 500 stock composite index assuming quarterly reinvestment dividends...

  3. 17 CFR 275.205-1 - Definition of “investment performance” of an investment company and “investment record” of an...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... & poor's 500 stock composite index for calendar 1971] Quarterly ending— Index value 1 Quarterly dividend yield-composite index Annual percent 2 Quarterly percent 3 (1/4 of annual) Dec. 1970 92.15 Mar. 1971 100... Investment record of Standard & Poor's 500 stock composite index assuming quarterly reinvestment dividends...

  4. Kill ratio calculation for in-line yield prediction

    NASA Astrophysics Data System (ADS)

    Lorenzo, Alfonso; Oter, David; Cruceta, Sergio; Valtuena, Juan F.; Gonzalez, Gerardo; Mata, Carlos

    1999-04-01

    The search for better yields in IC manufacturing calls for a smarter use of the vast amount of data that can be generated by a world-class production line. In this scenario, in-line inspection processes produce thousands of wafer maps, defect counts, defect types and pictures every day. A step forward is to correlate these with the other big data-generating area: test. In this paper, we present how these data can be put together and correlated to obtain a very useful yield-predicting tool. This correlation will first allow us to calculate the kill ratio, i.e., the probability for a defect of a certain size in a certain layer to kill the die. Then we will use that number to estimate the cosmetic yield that a wafer will have.
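    A minimal sketch of the two-step procedure this abstract outlines: estimate a kill ratio per defect class by joining inspection data with test results, then predict die yield from new inspection maps. The class names and counts below are hypothetical, and independence between defects is an added simplifying assumption not stated in the abstract.

```python
# Step 1 (assumption: per-class binomial estimate): the kill ratio is the
# fraction of defect-bearing dies of a class that fail at test.
def kill_ratio(dies_with_defect: int, failed_among_them: int) -> float:
    return failed_among_them / dies_with_defect

# Step 2: predicted survival probability of a die, assuming each defect
# kills independently with its class's kill ratio.
def predicted_die_yield(defect_counts: dict, kill_ratios: dict) -> float:
    p = 1.0
    for cls, count in defect_counts.items():
        p *= (1.0 - kill_ratios[cls]) ** count
    return p

kr = {"metal1_short": kill_ratio(200, 150)}       # hypothetical: 150 of 200 fail
y = predicted_die_yield({"metal1_short": 2}, kr)  # die carrying two such defects
```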

  5. Five Big Ideas

    ERIC Educational Resources Information Center

    Morgan, Debbie

    2012-01-01

    Designing quality continuing professional development (CPD) for those teaching mathematics in primary schools is a challenge. If the CPD is to be built on the scaffold of five big ideas in mathematics, what might be these five big ideas? Might it just be a case of, if you tell me your five big ideas, then I'll tell you mine? Here, there is…

  6. Complexity Science Framework for Big Data: Data-enabled Science

    NASA Astrophysics Data System (ADS)

    Surjalal Sharma, A.

    2016-07-01

    such new analytics can yield improved risk estimates. The challenges of scientific inference from complex and massive data are addressed by data-enabled science, also referred to as the Fourth Paradigm, after experiment, theory and simulation. An example of this approach is the modelling of dynamical and statistical features of natural systems, without assumptions of specific processes. An effective use of the techniques of complexity science to yield the inherent features of a system from extensive data from observations and large-scale numerical simulations is evident in the case of Earth's magnetosphere. The multiscale nature of the magnetosphere makes the numerical simulations a challenge, requiring very large computing resources. The reconstruction of dynamics from observational data can, however, yield the inherent characteristics using typical desktop computers. Such studies for other systems are in progress. The data-enabled approach using the framework of complexity science provides new techniques for modelling and prediction using Big Data. The studies of Earth's magnetosphere provide an example of the potential for a new approach to the development of quantitative analytic tools.

  7. Empowering Personalized Medicine with Big Data and Semantic Web Technology: Promises, Challenges, and Use Cases.

    PubMed

    Panahiazar, Maryam; Taslimitehrani, Vahid; Jadhav, Ashutosh; Pathak, Jyotishman

    2014-10-01

    In healthcare, big data tools and technologies have the potential to create significant value by improving outcomes while lowering costs for each individual patient. Diagnostic images, genetic test results and biometric information are increasingly generated and stored in electronic health records, presenting us with data that is by nature high in volume, variety and velocity, thereby necessitating novel ways to store, manage and process big data. This presents an urgent need to develop new, scalable and expandable big data infrastructure and analytical methods that can enable healthcare providers to access knowledge for the individual patient, yielding better decisions and outcomes. In this paper, we briefly discuss the nature of big data and the role of the semantic web and data analysis for generating "smart data" which offer actionable information that supports better decisions for personalized medicine. In our view, the biggest challenge is to create a system that makes big data robust and smart for healthcare providers and patients, which can lead to more effective clinical decision-making, improved health outcomes, and, ultimately, better management of healthcare costs. We highlight some of the challenges in using big data and propose the need for a semantic data-driven environment to address them. We illustrate our vision with practical use cases, and discuss a path for empowering personalized medicine using big data and semantic web technology.

  8. Molecular evolution of colorectal cancer: from multistep carcinogenesis to the big bang.

    PubMed

    Amaro, Adriana; Chiara, Silvana; Pfeffer, Ulrich

    2016-03-01

    Colorectal cancer is characterized by exquisite genomic instability, either in the form of microsatellite instability or chromosomal instability. Microsatellite instability is the result of mutation of mismatch repair genes or their silencing through promoter methylation as a consequence of the CpG island methylator phenotype. The molecular causes of chromosomal instability are less well characterized. Genomic instability and field cancerization lead to a high degree of intratumoral heterogeneity and determine the formation of cancer stem cells and epithelial-mesenchymal transition mediated by the TGF-β and APC pathways. Recent analyses using integrated genomics reveal different phases of colorectal cancer evolution. An initial phase of genomic instability that yields many clones with different mutations (big bang) is followed by an important, previously undetected phase of cancer evolution that consists in the stabilization of several clones and a relatively flat outgrowth. The big bang model can best explain the coexistence of several stable clones and is compatible with the fact that the analysis of the bulk of the primary tumor yields prognostic information.

  9. Water resources in the Big Lost River Basin, south-central Idaho

    USGS Publications Warehouse

    Crosthwaite, E.G.; Thomas, C.A.; Dyer, K.L.

    1970-01-01

    The Big Lost River basin occupies about 1,400 square miles in south-central Idaho and drains to the Snake River Plain. The economy in the area is based on irrigation agriculture and stockraising. The basin is underlain by a diverse assemblage of rocks which range in age from Precambrian to Holocene. The assemblage is divided into five groups on the basis of their hydrologic characteristics: carbonate rocks, noncarbonate rocks, cemented alluvial deposits, unconsolidated alluvial deposits, and basalt. The principal aquifer is unconsolidated alluvial fill that is several thousand feet thick in the main valley. The carbonate rocks are the major bedrock aquifer. They absorb a significant amount of precipitation and, in places, are very permeable, as evidenced by large springs discharging from or near exposures of carbonate rocks. Only the alluvium, carbonate rock and, locally, the basalt yield significant amounts of water. A total of about 67,000 acres is irrigated with water diverted from the Big Lost River. The annual flow of the river is highly variable and water-supply deficiencies are common. About 1 out of every 2 years is considered a drought year. In the period 1955-68, about 175 irrigation wells were drilled to provide a supplemental water supply to land irrigated from the canal system and to irrigate an additional 8,500 acres of new land. Average annual precipitation ranged from 8 inches on the valley floor to about 50 inches at some higher elevations during the base period 1944-68. The estimated water yield of the Big Lost River basin averaged 650 cfs (cubic feet per second) for the base period. Of this amount, 150 cfs was transpired by crops, 75 cfs left the basin as streamflow, and 425 cfs left as ground-water flow. A map of precipitation and estimated values of evapotranspiration were used to construct a water-yield map. A distinctive feature of the Big Lost River basin is the large interchange of water from surface streams into the ground and from the

  10. 26 CFR 1.1298-3 - Deemed sale or deemed dividend election by a U.S. person that is a shareholder of a former PFIC.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... its stock in the former PFIC for its fair market value on the termination date, as defined in... sale election, the shareholder's stock with respect to which the election was made under this paragraph... the deemed dividend election, the shareholder's stock with respect to which the election was made...

  11. Native Perennial Forb Variation Between Mountain Big Sagebrush and Wyoming Big Sagebrush Plant Communities

    NASA Astrophysics Data System (ADS)

    Davies, Kirk W.; Bates, Jon D.

    2010-09-01

    Big sagebrush ( Artemisia tridentata Nutt.) occupies large portions of the western United States and provides valuable wildlife habitat. However, information is lacking quantifying differences in native perennial forb characteristics between mountain big sagebrush [ A. tridentata spp. vaseyana (Rydb.) Beetle] and Wyoming big sagebrush [ A. tridentata spp. wyomingensis (Beetle & A. Young) S.L. Welsh] plant communities. This information is critical to accurately evaluate the quality of habitat and forage that these communities can produce because many wildlife species consume large quantities of native perennial forbs and depend on them for hiding cover. To compare native perennial forb characteristics on sites dominated by these two subspecies of big sagebrush, we sampled 106 intact big sagebrush plant communities. Mountain big sagebrush plant communities produced almost 4.5-fold more native perennial forb biomass and had greater native perennial forb species richness and diversity compared to Wyoming big sagebrush plant communities ( P < 0.001). Nonmetric multidimensional scaling (NMS) and the multiple-response permutation procedure (MRPP) demonstrated that native perennial forb composition varied between these plant communities ( P < 0.001). Native perennial forb composition was more similar within plant communities grouped by big sagebrush subspecies than expected by chance ( A = 0.112) and composition varied between community groups ( P < 0.001). Indicator analysis did not identify any perennial forbs that were completely exclusive and faithful, but did identify several perennial forbs that were relatively good indicators of either mountain big sagebrush or Wyoming big sagebrush plant communities. Our results suggest that management plans and habitat guidelines should recognize differences in native perennial forb characteristics between mountain and Wyoming big sagebrush plant communities.

  12. Big Data and Neuroimaging.

    PubMed

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  13. Cryptography for Big Data Security

    DTIC Science & Technology

    2015-07-13

    Cryptography for Big Data Security Book Chapter for Big Data: Storage, Sharing, and Security (3S) Distribution A: Public Release Ariel Hamlin1 Nabil...Email: arkady@ll.mit.edu ii Contents 1 Cryptography for Big Data Security 1 1.1 Introduction...48 Chapter 1 Cryptography for Big Data Security 1.1 Introduction With the amount

  14. Data: Big and Small.

    PubMed

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  15. Big Data in industry

    NASA Astrophysics Data System (ADS)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon has come the need for new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. This growth has created a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle with the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. The challenge is to develop new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years, and it is increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. The paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.

  16. The big data-big model (BDBM) challenges in ecological research

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from the sensors in those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  17. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  18. Investing in Kids: Early Childhood Programs and Local Economic Development

    ERIC Educational Resources Information Center

    Bartik, Timothy J.

    2011-01-01

    Early childhood programs, if designed correctly, pay big economic dividends down the road because they increase the skills of their participants. And since many of those participants will remain in the same state or local area as adults, the local economy benefits: more persons with better skills attract business, which provides more and better…

  19. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits; variance in natural illumination; and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results, in comparison with the state of the art, show the effectiveness of our algorithm.
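The paper's contribution is a deep convolutional network; by way of contrast, the classical pipelines it improves on often reduce counting to connected-component labeling of a segmented image. A minimal stand-in (the mask data is hypothetical, and this is not the paper's method):

```python
def count_blobs(mask):
    """Count connected regions of 1s (4-connectivity) in a binary grid,
    a classical stand-in for counting fruit candidates in a segmented image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # iterative flood fill over the new region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Toy "segmentation mask": two separate fruit-like regions
mask = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
]
print(count_blobs(mask))  # → 2
```

This sort of pipeline breaks down under the occlusion and illumination variance the abstract lists, which is what motivates learning the count directly from pixels.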

  20. The case for the relativistic hot big bang cosmology

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Schramm, D. N.; Kron, R. G.; Turner, E. L.

    1991-01-01

    What has become the standard model in cosmology is described, and some highlights are presented of the now substantial range of evidence that most cosmologists believe convincingly establishes this model, the relativistic hot big bang cosmology. It is shown that this model has yielded a set of interpretations and successful predictions that substantially outnumber the elements used in devising the theory, with no well-established empirical contradictions. Brief speculations are made on how the open puzzles and work in progress might affect future developments in this field.

  1. Big data need big theory too

    PubMed Central

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  2. Big data need big theory too.

    PubMed

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  3. Increased plasma levels of big-endothelin-2 and big-endothelin-3 in patients with end-stage renal disease.

    PubMed

    Miyauchi, Yumi; Sakai, Satoshi; Maeda, Seiji; Shimojo, Nobutake; Watanabe, Shigeyuki; Honma, Satoshi; Kuga, Keisuke; Aonuma, Kazutaka; Miyauchi, Takashi

    2012-10-15

    Big endothelins (pro-endothelins; inactive precursors) are converted to biologically active endothelins (ETs). Mammals and humans produce three ET family members, ET-1, ET-2, and ET-3, from three different genes. Although ET-1 is produced by vascular endothelial cells, these cells do not produce ET-3, which is produced by neuronal cells and organs such as the thyroid, the salivary gland, and the kidney. In patients with end-stage renal disease, abnormal vascular endothelial cell function and elevated plasma ET-1 and big ET-1 levels have been reported. It is unknown whether big ET-2 and big ET-3 plasma levels are altered in these patients. The purpose of the present study was to determine whether the endogenous ET-1, ET-2, and ET-3 systems, including big ETs, are altered in patients with end-stage renal disease. We measured plasma levels of ET-1, ET-3, big ET-1, big ET-2, and big ET-3 in patients on chronic hemodialysis (n=23) and age-matched healthy subjects (n=17). In patients on hemodialysis, plasma levels (measured just before hemodialysis) of both ETs and big ETs were markedly elevated, and the increase was higher for big ETs (big ET-1, 4-fold; big ET-2, 6-fold; big ET-3, 5-fold) than for ETs (ET-1, 1.7-fold; ET-3, 2-fold). In hemodialysis patients, plasma levels of the inactive precursors big ET-1, big ET-2, and big ET-3 are markedly increased, yet there is only a moderate increase in plasma levels of the active products, ET-1 and ET-3. This suggests that the activity of endothelin-converting enzyme contributing to circulating levels of ET-1 and ET-3 may be decreased in patients on chronic hemodialysis. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Big Data and medicine: a big deal?

    PubMed

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  5. Untapped Potential: Fulfilling the Promise of Big Brothers Big Sisters and the Bigs and Littles They Represent

    ERIC Educational Resources Information Center

    Bridgeland, John M.; Moore, Laura A.

    2010-01-01

    American children represent a great untapped potential in our country. For many young people, choices are limited and the goal of a productive adulthood is a remote one. This report paints a picture of who these children are, shares their insights and reflections about the barriers they face, and offers ways forward for Big Brothers Big Sisters as…

  6. Development and Validation of Big Four Personality Scales for the Schedule for Nonadaptive and Adaptive Personality-2nd Edition (SNAP-2)

    PubMed Central

    Calabrese, William R.; Rudick, Monica M.; Simms, Leonard J.; Clark, Lee Anna

    2012-01-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)—such as the Big Three, Big Four and Big Five trait models—have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality hierarchy. To unify these measurement models psychometrically, we sought to develop Big Five trait scales within the Schedule for Nonadaptive and Adaptive Personality–2nd Edition (SNAP-2). Through structural and content analyses, we examined relations between the SNAP-2, Big Five Inventory (BFI), and NEO-Five Factor Inventory (NEO-FFI) ratings in a large data set (N = 8,690), including clinical, military, college, and community participants. Results yielded scales consistent with the Big Four model of personality (i.e., Neuroticism, Conscientiousness, Introversion, and Antagonism) and not the Big Five as there were insufficient items related to Openness. Resulting scale scores demonstrated strong internal consistency and temporal stability. Structural and external validity were supported by strong convergent and discriminant validity patterns between Big Four scale scores and other personality trait scores and expectable patterns of self-peer agreement. Descriptive statistics and community-based norms are provided. The SNAP-2 Big Four Scales enable researchers and clinicians to assess personality at multiple levels of the trait hierarchy and facilitate comparisons among competing “Big Trait” models. PMID:22250598

  7. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    PubMed

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  8. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    ERIC Educational Resources Information Center

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  9. Implementing Big History.

    ERIC Educational Resources Information Center

    Welter, Mark

    2000-01-01

    Contends that world history should be taught as "Big History," a view that includes all space and time beginning with the Big Bang. Discusses five "Cardinal Questions" that serve as a course structure and address the following concepts: perspectives, diversity, change and continuity, interdependence, and causes. (CMK)

  10. Big data for health.

    PubMed

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  11. Development and validation of Big Four personality scales for the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2).

    PubMed

    Calabrese, William R; Rudick, Monica M; Simms, Leonard J; Clark, Lee Anna

    2012-09-01

    Recently, integrative, hierarchical models of personality and personality disorder (PD)--such as the Big Three, Big Four, and Big Five trait models--have gained support as a unifying dimensional framework for describing PD. However, no measures to date can simultaneously represent each of these potentially interesting levels of the personality hierarchy. To unify these measurement models psychometrically, we sought to develop Big Five trait scales within the Schedule for Nonadaptive and Adaptive Personality--Second Edition (SNAP-2). Through structural and content analyses, we examined relations between the SNAP-2, the Big Five Inventory (BFI), and the NEO Five-Factor Inventory (NEO-FFI) ratings in a large data set (N = 8,690), including clinical, military, college, and community participants. Results yielded scales consistent with the Big Four model of personality (i.e., Neuroticism, Conscientiousness, Introversion, and Antagonism) and not the Big Five, as there were insufficient items related to Openness. Resulting scale scores demonstrated strong internal consistency and temporal stability. Structural validity and external validity were supported by strong convergent and discriminant validity patterns between Big Four scale scores and other personality trait scores and expectable patterns of self-peer agreement. Descriptive statistics and community-based norms are provided. The SNAP-2 Big Four Scales enable researchers and clinicians to assess personality at multiple levels of the trait hierarchy and facilitate comparisons among competing big-trait models. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  12. Perspectives on making big data analytics work for oncology.

    PubMed

    El Naqa, Issam

    2016-12-01

    Oncology, with its unique combination of clinical, physical, technological, and biological data, provides an ideal case study for applying big data analytics to improve cancer treatment safety and outcomes. An oncology treatment course such as chemoradiotherapy can generate a large pool of information carrying the 5V hallmarks of big data. This data comprises a heterogeneous mixture of patient demographics, radiation/chemo dosimetry, multimodality imaging features, and biological markers generated over a treatment period that can span a few days to several weeks. Efforts using commercial and in-house tools are underway to facilitate data aggregation, ontology creation, sharing, visualization, and varying analytics in a secure environment. However, open questions related to proper data structure representation and effective analytics tools to support oncology decision-making need to be addressed. It is recognized that oncology data constitutes a mix of structured (tabulated) and unstructured (electronic documents) data that need to be processed to facilitate searching and subsequent knowledge discovery from relational or NoSQL databases. In this context, methods based on advanced analytics and image feature extraction for oncology applications will be discussed. On the other hand, the classical p (variables) ≫ n (samples) inference problem of statistical learning is challenged in the Big data realm, and this is particularly true for oncology applications, where p-omics is witnessing exponential growth while the number of cancer incidences has generally plateaued over the past 5 years, leading to a quasi-linear growth in samples per patient. Within the Big data paradigm, this kind of phenomenon may yield undesirable effects such as echo chamber anomalies, Yule-Simpson reversal paradox, or misleading ghost analytics. In this work, we will present these effects as they pertain to oncology and engage small thinking methodologies to counter these effects ranging from
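The Yule-Simpson reversal mentioned in the abstract is easy to reproduce numerically. The sketch below uses hypothetical treatment counts (not data from the paper) in which treatment A has the higher success rate within every disease stage, yet the pooled rates flip in favor of B:

```python
# Hypothetical success counts illustrating the Yule-Simpson reversal.
# Stage is a confounder: B is given mostly to early-stage (easier) patients.
data = {
    "early": {"A": (81, 87),   "B": (234, 270)},  # (successes, patients)
    "late":  {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, patients):
    return successes / patients

# Within each stage, treatment A wins...
within = {s: (rate(*arms["A"]), rate(*arms["B"])) for s, arms in data.items()}

# ...but after pooling over stages, treatment B appears to win.
pooled = {arm: rate(sum(data[s][arm][0] for s in data),
                    sum(data[s][arm][1] for s in data)) for arm in ("A", "B")}

print(within)  # A's rate is higher in both stages
print(pooled)  # yet pooled, B's rate is higher
```

The reversal happens because the arms see different stage mixes, which is exactly why naively aggregated clinical Big Data can mislead without stratification.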

  13. Big Data: Implications for Health System Pharmacy

    PubMed Central

    Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.

    2016-01-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services. PMID:27559194

  14. Big Data: Implications for Health System Pharmacy.

    PubMed

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  15. BigWig and BigBed: enabling browsing of large distributed datasets.

    PubMed

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
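The multi-resolution idea behind this can be sketched in a few lines: precompute per-bin summaries at coarser zoom levels, then answer a wide query from the coarsest level that still has enough bins. This is an illustration of the general technique only, not the actual BigWig on-disk format or its R-tree layout; the bin factors, the alignment assumption, and all names are choices made here:

```python
def build_zoom_levels(values, factors=(4, 16)):
    """Precompute per-bin means at coarser resolutions, mimicking the
    multi-resolution summaries a BigWig file stores on disk.
    Assumes len(values) is divisible by each factor."""
    levels = {1: list(values)}
    for f in factors:
        levels[f] = [sum(values[i:i + f]) / f for i in range(0, len(values), f)]
    return levels

def query_mean(levels, start, end, max_bins=8):
    """Serve a region mean from the coarsest level that covers [start, end)
    with at most max_bins bins, so wide views transfer little data.
    Assumes start/end are aligned to the chosen bin size; returns None
    if no stored level is coarse enough."""
    span = end - start
    for f in sorted(levels, reverse=True):  # try the coarsest level first
        if span % f == 0 and start % f == 0 and span // f <= max_bins:
            bins = levels[f][start // f: end // f]
            return sum(bins) / len(bins)

values = [float(i % 10) for i in range(1024)]
levels = build_zoom_levels(values)
print(query_mean(levels, 0, 128))  # answered from the 16x summary level
```

A 128-unit view is answered from 8 coarse bins instead of 128 raw values; in the real format the same trick keeps a remote browser view from fetching the whole file.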

  16. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; Paris, Mark W.

    2017-03-01

    We calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. We analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.
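The effect of a lepton asymmetry on an equilibrium Fermi-Dirac neutrino spectrum can be illustrated with the standard degeneracy-parameter form of the number-density integral, n ∝ ∫ x² / (exp(x − ξ) + 1) dx with x = E/T and ξ the degeneracy parameter. This numeric sketch (a crude Riemann sum with cutoffs and step counts chosen here for illustration; it is not the paper's coupled Boltzmann transport calculation) shows that a small positive ξ raises the neutrino number density over the symmetric case:

```python
import math

def fd_number_density(xi, xmax=50.0, steps=20000):
    """Dimensionless FD number-density integral  ∫ x^2 / (exp(x - xi) + 1) dx,
    with x = E/T and xi the degeneracy (lepton-asymmetry) parameter.
    For xi = 0 the exact value is (3/2) * zeta(3) ≈ 1.803."""
    dx = xmax / steps
    return sum((i * dx) ** 2 / (math.exp(i * dx - xi) + 1.0) * dx
               for i in range(1, steps + 1))

n_sym = fd_number_density(0.0)    # symmetric (zero lepton number) case
n_asym = fd_number_density(0.1)   # small positive degeneracy parameter
print(n_asym / n_sym)             # > 1: nu enhanced relative to the symmetric case
```

The corresponding antineutrino integral uses −ξ, so the same ξ that boosts neutrinos depletes antineutrinos; it is distortions away from these equilibrium shapes that the paper propagates through nucleosynthesis.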

  17. Brief report: How short is too short? An ultra-brief measure of the big-five personality domains implicates "agreeableness" as a risk for all-cause mortality.

    PubMed

    Chapman, Benjamin P; Elliot, Ari J

    2017-08-01

    Controversy exists over the use of brief Big Five scales in health studies. We investigated links between an ultra-brief measure, the Big Five Inventory-10, and mortality in the General Social Survey. The Agreeableness scale was associated with elevated mortality risk (hazard ratio = 1.26, p = .017). This effect was attributable to the reversed-scored item "Tends to find fault with others," so that greater fault-finding predicted lower mortality risk. The Conscientiousness scale approached meta-analytic estimates, which were not precise enough for significance. Those seeking Big Five measurement in health studies should be aware that the Big Five Inventory-10 may yield unusual results.
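Reverse-keyed items such as "Tends to find fault with others" are scored by mirroring the response scale before items are combined. A minimal sketch on a 1-5 Likert scale; the two-item layout follows the BFI-10's one-positive-plus-one-reversed design, but the specific responses are hypothetical:

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Reverse-key a Likert response: 1<->5, 2<->4, 3 stays 3 on a 1-5 scale."""
    return scale_min + scale_max - response

def scale_score(responses, reversed_items):
    """Mean of item responses after reverse-keying the flagged items."""
    keyed = [reverse_score(r) if i in reversed_items else r
             for i, r in enumerate(responses)]
    return sum(keyed) / len(keyed)

# Hypothetical two-item Agreeableness scale: item 0 is reverse-keyed
# ("Tends to find fault with others"), item 1 is keyed positively.
print(scale_score([2, 4], reversed_items={0}))  # → (4 + 4) / 2 = 4.0
```

Because the two-item score leans so heavily on each item, an anomaly in a single reversed item, as the abstract reports, can drive the whole scale's association with an outcome.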

  18. Big Challenges and Big Opportunities: The Power of "Big Ideas" to Change Curriculum and the Culture of Teacher Planning

    ERIC Educational Resources Information Center

    Hurst, Chris

    2014-01-01

    Mathematical knowledge of pre-service teachers is currently "under the microscope" and the subject of research. This paper proposes a different approach to teacher content knowledge based on the "big ideas" of mathematics and the connections that exist within and between them. It is suggested that these "big ideas"…

  19. Countering misinformation concerning big sagebrush

    Treesearch

    Bruce L Welch; Craig Criddle

    2003-01-01

    This paper examines the scientific merits of eight axioms of range or vegetative management pertaining to big sagebrush. These axioms are: (1) Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis) does not naturally exceed 10 percent canopy cover and mountain big sagebrush (A. t. ssp. vaseyana) does not naturally exceed 20 percent canopy...

  20. BigNeuron dataset V.0.0

    DOE Data Explorer

    Ramanathan, Arvind

    2016-01-01

    The cleaned bench-testing reconstructions for the gold166 datasets have been put online on GitHub at https://github.com/BigNeuron/Events-and-News/wiki/BigNeuron-Events-and-News and https://github.com/BigNeuron/Data/releases/tag/gold166_bt_v1.0. The respective image datasets were released earlier from other sites (the main pointer is also available on GitHub at https://github.com/BigNeuron/Data/releases/tag/Gold166_v1), but because the files were big, the actual downloading was distributed across three continents.

  1. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    PubMed

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data: its current manifestation is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem defining to address science challenges.

  2. Challenges of Big Data Analysis.

    PubMed

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
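
    The spurious-correlation phenomenon the authors describe can be demonstrated with a small stdlib-only simulation (a hypothetical illustration, not from the article): as the number of completely unrelated predictors grows, the largest sample correlation with the response grows too, inviting false discoveries.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def max_abs_correlation(n_samples, n_features, seed=0):
    """Largest |correlation| between a random response and n_features
    independent random predictors; every TRUE correlation is zero."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_samples)]
    best = 0.0
    for _ in range(n_features):
        x = [rng.gauss(0, 1) for _ in range(n_samples)]
        best = max(best, abs(pearson(x, y)))
    return best

# With more spurious features, the maximal "discovered" correlation grows,
# even though no predictor is truly related to the response.
print(max_abs_correlation(50, 10))
print(max_abs_correlation(50, 2000))
```

    With 50 samples, screening 2000 pure-noise predictors typically yields a maximal sample correlation well above 0.3, which is exactly the high-dimensional trap the article warns about.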

  3. Challenges of Big Data Analysis

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-01-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469

  4. The Companion Dog as a Model for the Longevity Dividend

    PubMed Central

    Creevy, Kate E.; Austad, Steven N.; Hoffman, Jessica M.; O’Neill, Dan G.; Promislow, Daniel E.L.

    2016-01-01

    The companion dog is the most phenotypically diverse species on the planet. This enormous variability between breeds extends not only to morphology and behavior but also to longevity and the disorders that affect dogs. There are remarkable overlaps and similarities between the human and canine species. Dogs closely share our human environment, including its many risk factors, and the veterinary infrastructure to manage health in dogs is second only to the medical infrastructure for humans. Distinct breed-based health profiles, along with their well-developed health record system and high overlap with the human environment, make the companion dog an exceptional model to improve understanding of the physiological, social, and economic impacts of the longevity dividend (LD). In this review, we describe what is already known about age-specific patterns of morbidity and mortality in companion dogs, and then explore whether this existing evidence supports the LD. We also discuss some potential limitations to using dogs as models of aging, including the fact that many dogs are euthanized before they have lived out their natural life span. Overall, we conclude that the companion dog offers high potential as a model system that will enable deeper research into the LD than is otherwise possible. PMID:26729759

  5. Big Data and Chemical Education

    ERIC Educational Resources Information Center

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  6. Big data in fashion industry

    NASA Astrophysics Data System (ADS)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  7. The Big6 Collection: The Best of the Big6 Newsletter.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    The Big6 is a complete approach to implementing meaningful learning and teaching of information and technology skills, essential for 21st century living. Including in-depth articles, practical tips, and explanations, this book offers a varied range of material about students and teachers, the Big6, and curriculum. The book is divided into 10 main…

  8. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including “machine learning” algorithms, with “unsupervised” and “supervised” examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:27908398

  9. Big Data Bioinformatics

    PubMed Central

    GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO

    2017-01-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including “machine learning” algorithms, with “unsupervised” and “supervised” examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:24799088

  10. Big data bioinformatics.

    PubMed

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
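
    The supervised/unsupervised distinction this review draws can be sketched with two minimal toy examples (hypothetical data; the review points to R packages, while this sketch uses plain Python): a nearest-centroid classifier learns from labeled points, whereas 2-means clustering groups unlabeled points.

```python
# Supervised: nearest-centroid classification on labeled 1-D points.
def nearest_centroid(train, labels, query):
    """Assign query to the label whose class mean is closest."""
    groups = {}
    for x, y in zip(train, labels):
        groups.setdefault(y, []).append(x)
    means = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    return min(means, key=lambda y: abs(means[y] - query))

# Unsupervised: 1-D 2-means clustering on unlabeled points.
def two_means(points, iters=20):
    """Lloyd's algorithm with two centers initialized at the extremes
    (assumes the data actually splits into two non-empty groups)."""
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

print(nearest_centroid([1.0, 1.2, 8.0, 8.3], ["low", "low", "high", "high"], 7.5))
print(two_means([1.0, 1.2, 1.1, 8.0, 8.3, 7.9]))
```

    The classifier needs the labels to learn class means; the clustering recovers the same two groups from the raw values alone, which is the essential contrast the review describes.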

  11. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    PubMed

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Treesearch

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  13. The Big Bang Theory

    ScienceCinema

    Lincoln, Don

    2018-01-16

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn’t true. In this video, Fermilab’s Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  14. The Big Bang Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The Big Bang is the name of the most respected theory of the creation of the universe. Basically, the theory says that the universe was once smaller and denser and has been expanding for eons. One common misconception is that the Big Bang theory says something about the instant that set the expansion into motion; however, this isn’t true. In this video, Fermilab’s Dr. Don Lincoln tells about the Big Bang theory and sketches some speculative ideas about what caused the universe to come into existence.

  15. Seeding considerations in restoring big sagebrush habitat

    Treesearch

    Scott M. Lambert

    2005-01-01

    This paper describes methods of managing or seeding to restore big sagebrush communities for wildlife habitat. The focus is on three big sagebrush subspecies, Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis), basin big sagebrush (Artemisia tridentata ssp. tridentata), and mountain...

  16. ARTIST CONCEPT - BIG JOE

    NASA Image and Video Library

    1963-09-01

    S63-19317 (October 1963) --- Pen and ink views of comparative arrangements of several capsules including the existing "Big Joe" design, the compromise "Big Joe" design, and the "Little Joe". All capsule designs are labeled and include dimensions. Photo credit: NASA

  17. Big Society, Big Deal?

    ERIC Educational Resources Information Center

    Thomson, Alastair

    2011-01-01

    Political leaders like to put forward guiding ideas or themes which pull their individual decisions into a broader narrative. For John Major it was Back to Basics, for Tony Blair it was the Third Way and for David Cameron it is the Big Society. While Mr. Blair relied on Lord Giddens to add intellectual weight to his idea, Mr. Cameron's legacy idea…

  18. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE PAGES

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; ...

    2017-03-03

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  19. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  20. Big Data Analytics in Medicine and Healthcare.

    PubMed

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.

  1. The Big Bang Singularity

    NASA Astrophysics Data System (ADS)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.
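
    The singularity prediction of the FRW models can be summarized with the standard Friedmann equation (textbook form, not reproduced from this thesis):

```latex
% Friedmann equation for the FRW scale factor a(t)
\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2},
\qquad \rho \propto a^{-3}\ \text{(matter)}, \quad \rho \propto a^{-4}\ \text{(radiation)}.
```

    Tracing a matter- or radiation-dominated solution backward in time, a(t) reaches zero at a finite time in the past, where the energy density and curvature diverge: the big bang singularity.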

  2. Medical big data: promise and challenges.

    PubMed

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
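
    Propensity-score matching, one of the methods the authors mention for confounding control, can be sketched as nearest-neighbor matching on a precomputed score (a minimal illustration with made-up numbers; real scores would come from a model of treatment assignment given covariates):

```python
def matched_effect(treated, controls):
    """Match each treated unit to the control with the nearest propensity
    score and average the outcome differences. Units are
    (propensity_score, outcome) pairs; the scores are assumed to be
    precomputed, e.g. by logistic regression of treatment on covariates."""
    diffs = []
    for score_t, outcome_t in treated:
        score_c, outcome_c = min(controls, key=lambda c: abs(c[0] - score_t))
        diffs.append(outcome_t - outcome_c)
    return sum(diffs) / len(diffs)

# Hypothetical (score, outcome) pairs for treated and control patients.
treated = [(0.8, 14.0), (0.62, 12.0), (0.72, 13.5)]
controls = [(0.75, 12.5), (0.55, 11.0), (0.3, 9.0), (0.65, 11.5)]
print(matched_effect(treated, controls))
```

    Comparing each treated unit only against its most similar control approximates the treated-versus-untreated contrast among comparable patients, which is how matching mitigates confounding in observational data.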

  3. Medical big data: promise and challenges

    PubMed Central

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-01-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis generating rather than hypothesis testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology. PMID:28392994

  4. Factors determining yield and quality of illicit indoor cannabis (Cannabis spp.) production.

    PubMed

    Vanhove, Wouter; Van Damme, Patrick; Meert, Natalie

    2011-10-10

    The judiciary currently faces difficulties in adequately estimating the yield of illicit indoor cannabis plantations; these data are required for penalization, which is based on the profits gained. A full factorial experiment, in which two overhead light intensities, two plant densities and four varieties were combined in the indoor cultivation of cannabis (Cannabis spp.), was used to reveal cannabis drug yield and quality under each of the factor combinations. The highest yield was found for the Super Skunk and Big Bud varieties, which also exhibited the highest concentrations of Δ(9)-tetrahydrocannabinol (THC). Results show that plant density and light intensity are additive factors, whereas the variety factor significantly interacts with both the plant density and light intensity factors. Adequate estimations of the yield of illicit indoor cannabis plantations can only be made if, upon seizure, all factors considered in this study are accounted for. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
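
    The additive-versus-interacting distinction drawn in the abstract can be made concrete with the standard effect estimates for a 2×2 factorial design (purely hypothetical yield numbers, not the paper's data):

```python
def factorial_effects(y):
    """Main effects and interaction for a 2x2 factorial design.
    y[a][b] is the mean yield at light level a and density level b
    (0 = low, 1 = high)."""
    main_light = ((y[1][0] + y[1][1]) - (y[0][0] + y[0][1])) / 2.0
    main_density = ((y[0][1] + y[1][1]) - (y[0][0] + y[1][0])) / 2.0
    interaction = ((y[1][1] - y[1][0]) - (y[0][1] - y[0][0])) / 2.0
    return main_light, main_density, interaction

# Hypothetical mean yields (g/plant): rows = light (low, high), cols = density.
yields = [[20.0, 26.0], [30.0, 36.0]]
light, density, inter = factorial_effects(yields)
print(light, density, inter)
```

    With these made-up numbers the density effect is the same at both light levels, so the interaction term is zero: that is what "additive factors" means, while a variety whose density effect changed with light level would show a nonzero interaction.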

  5. Measuring the Promise of Big Data Syllabi

    ERIC Educational Resources Information Center

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  6. How Research on Human Progeroid and Antigeroid Syndromes Can Contribute to the Longevity Dividend Initiative

    PubMed Central

    Hisama, Fuki M.; Oshima, Junko; Martin, George M.

    2016-01-01

    Although translational applications derived from research on basic mechanisms of aging are likely to enhance health spans and life spans for most of us (the longevity dividend), there will remain subsets of individuals with special vulnerabilities. Medical genetics is a discipline that describes such “private” patterns of aging and can reveal underlying mechanisms, many of which support genomic instability as a major mechanism of aging. We review examples of three classes of informative disorders: “segmental progeroid syndromes” (those that appear to accelerate multiple features of aging), “unimodal progeroid syndromes” (those that impact on a single disorder of aging), and “unimodal antigeroid syndromes,” variants that provide enhanced protection against specific disorders of aging; we urge our colleagues to expand our meager research efforts on the latter, including ancillary somatic cell genetic approaches. PMID:26931459

  7. The big bang

    NASA Astrophysics Data System (ADS)

    Silk, Joseph

    Our universe was born billions of years ago in a hot, violent explosion of elementary particles and radiation - the big bang. What do we know about this ultimate moment of creation, and how do we know it? Drawing upon the latest theories and technology, this new edition of The big bang, is a sweeping, lucid account of the event that set the universe in motion. Joseph Silk begins his story with the first microseconds of the big bang, on through the evolution of stars, galaxies, clusters of galaxies, quasars, and into the distant future of our universe. He also explores the fascinating evidence for the big bang model and recounts the history of cosmological speculation. Revised and updated, this new edition features all the most recent astronomical advances, including: Photos and measurements from the Hubble Space Telescope, Cosmic Background Explorer Satellite (COBE), and Infrared Space Observatory; the latest estimates of the age of the universe; new ideas in string and superstring theory; recent experiments on neutrino detection; new theories about the presence of dark matter in galaxies; new developments in the theory of the formation and evolution of galaxies; the latest ideas about black holes, worm holes, quantum foam, and multiple universes.

  8. Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.

    PubMed

    Basanta-Val, Pablo; Sánchez-Fernández, Luis

    2018-06-01

    The proliferation of new data sources stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has given rise to a new type of analytics that processes Internet of Things data with low-cost engines, using parallel computing to speed up data processing. In this context, the article presents an initiative, called BIG-Boletín Oficial del Estado (BOE), designed to process the Spanish official government gazette (BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application infrastructure processing engine is described from an architectural and a performance perspective, showing evidence of how this type of infrastructure improves the performance of several types of simple analytics as several machines cooperate.
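
    The per-document parallelism described here can be sketched with a thread pool that runs one keyword analytic per document (a hypothetical miniature, not the BIG-BOE implementation; the document contents are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical gazette documents; a real system would load BOE issues.
DOCUMENTS = {
    "boe-001": "public procurement notice for road maintenance",
    "boe-002": "appointment of officials and public holidays",
    "boe-003": "road safety regulation and procurement rules",
}

def search(doc_id, keyword):
    """One analytic task: check a single document for a keyword."""
    return doc_id, keyword in DOCUMENTS[doc_id]

def parallel_search(keyword):
    """Run the per-document analytic over all documents in parallel."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        hits = pool.map(lambda doc_id: search(doc_id, keyword), DOCUMENTS)
    return sorted(doc_id for doc_id, found in hits if found)

print(parallel_search("procurement"))
```

    Each document is an independent task, so the work partitions cleanly across workers; a cluster engine applies the same pattern across machines rather than threads.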

  9. Big Data's Role in Precision Public Health.

    PubMed

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  10. Antigravity and the big crunch/big bang transition

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  11. Restoring Wyoming big sagebrush

    Treesearch

    Cindy R. Lysne

    2005-01-01

    The widespread occurrence of big sagebrush can be attributed to many adaptive features. Big sagebrush plays an essential role in its communities by providing wildlife habitat, modifying local environmental conditions, and facilitating the reestablishment of native herbs. Currently, however, many sagebrush steppe communities are highly fragmented. As a result, restoring...

  12. Exploiting big data for critical care research.

    PubMed

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  13. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    PubMed

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four of the selected domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
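The 2-4 µM dissociation constants reported above imply, for a simple one-site binding equilibrium, a fractional saturation θ = [Ca²+] / (Kd + [Ca²+]). A minimal sketch of that relationship follows; the one-site model and the example concentration are illustrative assumptions, not taken from the paper:

```python
# Fractional saturation of a single Ca2+ binding site:
#   theta = [Ca] / (Kd + [Ca])   (simple 1:1 equilibrium, assumed)

def fraction_bound(ca_conc_uM: float, kd_uM: float) -> float:
    """Fraction of domains with Ca2+ bound under a one-site model."""
    return ca_conc_uM / (kd_uM + ca_conc_uM)

# At [Ca2+] equal to Kd, exactly half the sites are occupied.
for kd in (2.0, 4.0):  # reported Kd range, in micromolar
    print(f"Kd={kd} uM: theta at 10 uM Ca2+ = {fraction_bound(10.0, kd):.2f}")
    # -> 0.83 for Kd=2.0, 0.71 for Kd=4.0
```

At a free Ca²+ concentration of 10 µM, a domain with Kd in the reported range would therefore be mostly occupied, consistent with the Stains-all binding observed.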

  14. Metal atom dynamics in superbulky metallocenes: a comparison of (Cp(BIG))2Sn and (Cp(BIG))2Eu.

    PubMed

    Harder, Sjoerd; Naglav, Dominik; Schwerdtfeger, Peter; Nowik, Israel; Herber, Rolfe H

    2014-02-17

    Cp(BIG)2Sn (Cp(BIG) = (4-n-Bu-C6H4)5cyclopentadienyl), prepared by reaction of 2 equiv of Cp(BIG)Na with SnCl2, crystallized isomorphous to other known metallocenes with this ligand (Ca, Sr, Ba, Sm, Eu, Yb). Similarly, it shows perfect linearity, C-H···C(π) bonding between the Cp(BIG) rings and out-of-plane bending of the aryl substituents toward the metal. Whereas all other Cp(BIG)2M complexes show large disorder in the metal position, the Sn atom in Cp(BIG)2Sn is perfectly ordered. In contrast, (119)Sn and (151)Eu Mößbauer investigations on the corresponding Cp(BIG)2M metallocenes show that Sn(II) is more dynamic and loosely bound than Eu(II). The large displacement factors in the group 2 and especially in the lanthanide(II) metallocenes Cp(BIG)2M can be explained by static metal disorder in a plane parallel to the Cp(BIG) rings. Despite parallel Cp(BIG) rings, these metallocenes have a nonlinear Cpcenter-M-Cpcenter geometry. This is explained by an ionic model in which metal atoms are polarized by the negatively charged Cp rings. The extent of nonlinearity is in line with trends found in M(2+) ion polarizabilities. The range of known calculated dipole polarizabilities at the Douglas-Kroll CCSD(T) level was extended with values (atomic units) for Sn(2+) 15.35, Sm(2+)(4f(6) (7)F) 9.82, Eu(2+)(4f(7) (8)S) 8.99, and Yb(2+)(4f(14) (1)S) 6.55. This polarizability model cannot be applied to predominantly covalently bound Cp(BIG)2Sn, which shows a perfectly ordered structure. The bent geometry of Cp*2Sn should therefore not be explained by metal polarizability but is due to van der Waals Cp*···Cp* attraction and (to some extent) to a small p-character component in the Sn lone pair.

  15. Big Joe Capsule Assembly Activities

    NASA Image and Video Library

    1959-08-01

    Big Joe Capsule Assembly Activities in 1959 at NASA Glenn Research Center (formerly NASA Lewis). Big Joe was an Atlas missile that successfully launched a boilerplate model of the Mercury capsule on September 9, 1959.

  16. Urgent Call for Nursing Big Data.

    PubMed

    Delaney, Connie W

    2016-01-01

    The purpose of this panel is to expand internationally a National Action Plan for sharable and comparable nursing data for quality improvement and big data science. There is an urgent need to assure that nursing has sharable and comparable data for quality improvement and big data science. A national collaborative, Nursing Knowledge and Big Data Science, includes multi-stakeholder groups focused on a National Action Plan toward implementing and using sharable and comparable nursing big data. Panelists will share accomplishments and future plans with an eye toward international collaboration. This presentation is suitable for any audience attending the NI2016 conference.

  17. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. 
© 2018 Iacono et al.; Published by Cold Spring Harbor
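The "directed down-sampling" described above, pooling transcriptionally similar cells into aggregate "index cell" transcriptomes, can be caricatured with a few k-means iterations over a cells-by-genes count matrix. This is only a toy sketch under assumed choices (Euclidean distance, a fixed number of index cells, summed counts as the pooling rule); it is not the bigSCale implementation:

```python
import numpy as np

def index_cell_transcriptomes(counts: np.ndarray, n_index: int, seed: int = 0) -> np.ndarray:
    """Pool similar cells into `n_index` aggregate transcriptomes.

    counts: (cells, genes) integer count matrix. Cells are assigned to the
    nearest centroid over a few Lloyd iterations, then counts are summed
    per cluster, so each "index cell" accumulates its members' transcripts.
    """
    rng = np.random.default_rng(seed)
    centroids = counts[rng.choice(len(counts), n_index, replace=False)].astype(float)
    for _ in range(10):  # simple k-means (Lloyd) iterations
        dist = ((counts[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for k in range(n_index):
            if (labels == k).any():
                centroids[k] = counts[labels == k].mean(0)
    # one pooled transcriptome per cluster; total transcript counts are preserved
    return np.stack([counts[labels == k].sum(0) for k in range(n_index)])
```

Summing rather than averaging keeps every transcript from every member cell, which mirrors the stated goal of preserving information from individual cells while shrinking the matrix.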

  18. [Big data in medicine and healthcare].

    PubMed

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Several technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves, e.g. in social networks, and digitalization continues to advance. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  19. High School Students as Mentors: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study

    ERIC Educational Resources Information Center

    Herrera, Carla; Kauh, Tina J.; Cooney, Siobhan M.; Grossman, Jean Baldwin; McMaken, Jennifer

    2008-01-01

    High schools have recently become a popular source of mentors for school-based mentoring (SBM) programs. The high school Bigs program of Big Brothers Big Sisters of America, for example, currently involves close to 50,000 high-school-aged mentors across the country. While the use of these young mentors has several potential advantages, their age…

  20. Making big sense from big data in toxicology by read-across.

    PubMed

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data; the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances, ensuring that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development, called REACH-across, which aims to support and automate structure-based read-across, is presented among other developments.
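The gap-filling logic of read-across, predicting an untested substance's property from its most similar tested neighbours, reduces in its simplest form to weighted nearest-neighbour intrapolation over a descriptor space. The descriptors, distance metric, and choice of k below are illustrative assumptions for a sketch; this is not the REACH-across tool:

```python
import numpy as np

def read_across(descriptors: np.ndarray, properties: np.ndarray,
                query: np.ndarray, k: int = 3) -> float:
    """Predict a property for `query` from its k most similar substances.

    descriptors: (n, d) numeric descriptors of substances with known data.
    properties:  (n,) measured property values for those substances.
    Similarity here is plain Euclidean distance; real read-across rests on
    structural and biological similarity and expert-reviewed analogues.
    """
    dist = np.linalg.norm(descriptors - query, axis=1)
    nearest = np.argsort(dist)[:k]
    # inverse-distance weighting so closer analogues count more
    weights = 1.0 / (dist[nearest] + 1e-9)
    return float(np.average(properties[nearest], weights=weights))
```

The "local" character of read-across shows up in the restriction to the k closest analogues rather than a global regression over all substances.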

  1. [Big data in official statistics].

    PubMed

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Before big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have agreed on a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  2. Considerations on Geospatial Big Data

    NASA Astrophysics Data System (ADS)

    LIU, Zhen; GUO, Huadong; WANG, Changlin

    2016-11-01

    Geospatial data, as a significant portion of big data, has recently gained the full attention of researchers. However, few researchers focus on the evolution of geospatial data and its scientific research methodologies. When entering into the big data era, fully understanding the changing research paradigm associated with geospatial data will definitely benefit future research on big data. In this paper, we look deep into these issues by examining the components and features of geospatial big data, reviewing relevant scientific research methodologies, and examining the evolving pattern of geospatial data in the scope of the four ‘science paradigms’. This paper proposes that geospatial big data has significantly shifted the scientific research methodology from ‘hypothesis to data’ to ‘data to questions’ and it is important to explore the generality of growing geospatial data ‘from bottom to top’. Particularly, four research areas that mostly reflect data-driven geospatial research are proposed: spatial correlation, spatial analytics, spatial visualization, and scientific knowledge discovery. It is also pointed out that privacy and quality issues of geospatial data may require more attention in the future. Also, some challenges and thoughts are raised for future discussion.

  3. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Treesearch

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species ofWild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  4. Big Data in Medicine is Driving Big Changes

    PubMed Central

    Verspoor, K.

    2014-01-01

    Summary Objectives To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  5. Health Informatics Scientists' Perception About Big Data Technology.

    PubMed

    Minou, John; Routsis, Fotios; Gallos, Parisis; Mantas, John

    2017-01-01

    The aim of this paper is to present the perceptions of Health Informatics Scientists about Big Data Technology in healthcare. An empirical study was conducted among 46 scientists to assess their knowledge of Big Data Technology and their perceptions about using this technology in healthcare. Based on the study findings, 86.7% of the scientists had knowledge of Big Data Technology. Furthermore, 59.1% of the scientists believed that Big Data Technology refers to structured data. Additionally, 100% of the population believed that Big Data Technology can be implemented in healthcare. Finally, the majority did not know of any cases of the use of Big Data Technology in Greece, while 57.8% of them mentioned that they knew of use cases of Big Data Technology abroad.

  6. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    PubMed

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  7. A SWOT Analysis of Big Data

    ERIC Educational Resources Information Center

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  8. A survey of big data research

    PubMed Central

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  9. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

    Software Architecture: Trends and New Directions (#SEIswArch). © 2014 Carnegie Mellon University. Software Architecture for Big Data Systems.

  10. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  11. Big Domains Are Novel Ca2+-Binding Modules: Evidences from Big Domains of Leptospira Immunoglobulin-Like (Lig) Proteins

    PubMed Central

    Palaniappan, Raghavan U. M.; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P.; Sharma, Yogendra; Chang, Yung-Fu

    2010-01-01

    Background Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca2+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca2+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. Principal Findings We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four of the selected domains bind Ca2+ with dissociation constants of 2–4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. Conclusions We demonstrate that the Lig proteins are Ca2+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca2+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca2+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca2+ binding. PMID:21206924

  12. Big sagebrush seed bank densities following wildfires

    USDA-ARS?s Scientific Manuscript database

    Big sagebrush (Artemisia spp.) is a critical shrub to many wildlife species including sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush is killed by wildfires and big sagebrush seed is generally short-lived and do not s...

  13. Epidemiology in wonderland: Big Data and precision medicine.

    PubMed

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for prediction of disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented upon. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relation; and (d) clarifying whether "health" may today be redefined, as some maintain, in purely technological terms.

  14. Big Data and Analytics in Healthcare.

    PubMed

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  15. "Big data" in economic history.

    PubMed

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  16. Big Data and Ambulatory Care

    PubMed Central

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2015-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an overview of the legal framework governing health information, dispels misconceptions about privacy regulations, and highlights how ambulatory care providers in particular can maximize the utility of big data to improve care. PMID:25401945

  17. Big Data Knowledge in Global Health Education.

    PubMed

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  18. Big data for bipolar disorder.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data, relating to its size, heterogeneity, complexity, and the unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  19. GEOSS: Addressing Big Data Challenges

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors, including new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to address global change. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use the term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation explores and discusses the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS), and particularly on its common digital infrastructure, the GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as the 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; the many lessons learned from its Big Data experience are valuable.

  20. Big Questions: Missing Antimatter

    ScienceCinema

    Lincoln, Don

    2018-06-08

    Einstein's equation E = mc2 is often said to mean that energy can be converted into matter. More accurately, energy can be converted to matter and antimatter. During the first moments of the Big Bang, the universe was smaller, hotter and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However, when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although he doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  1. Big data in biomedicine.

    PubMed

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Big Data’s Role in Precision Public Health

    PubMed Central

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  3. Big data in forensic science and medicine.

    PubMed

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have published their own perspectives on the topic. Perspectives and debates are flourishing, yet a consensus definition of big data is still lacking. The 3Vs paradigm, standing for Volume, Variety and Velocity, is frequently invoked to define the principles of big data. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine; yet most of these techniques are far from new, the oldest dating back more than 50 years. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science still awaits its own position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  4. Big Data and Perioperative Nursing.

    PubMed

    Westra, Bonnie L; Peterson, Jessica J

    2016-10-01

    Big data are large volumes of digital data that can be collected from disparate sources and are challenging to analyze. These data are often described with the five "Vs": volume, velocity, variety, veracity, and value. Perioperative nurses contribute to big data through documentation in the electronic health record during routine surgical care, and these data have implications for clinical decision making, administrative decisions, quality improvement, and big data science. This article explores methods to improve the quality of perioperative nursing data and provides examples of how these data can be combined with broader nursing data for quality improvement. We also discuss a national action plan for nursing knowledge and big data science and how perioperative nurses can engage in collaborative actions to transform health care. Standardized perioperative nursing data has the potential to affect care far beyond the original patient. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  5. Modeling in Big Data Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Szymczak, Samantha; Gunning, Dave

    Human-Centered Big Data Research (HCBDR) is an area of work that focuses on the methodologies and research areas concerned with understanding how humans interact with "big data". In the context of this paper, we refer to "big data" in a holistic sense, including most (if not all) of the dimensions defining the term, such as complexity, variety, velocity, veracity, etc. Simply put, big data requires us as researchers to question and reconsider existing approaches, with the opportunity to illuminate new kinds of insights that were traditionally out of reach to humans. The purpose of this article is to summarize the discussions and ideas about the role of models in HCBDR at a recent workshop. Models, within the context of this paper, include both computational and conceptual mental models. As such, the discussions summarized in this article seek to understand the connection between these two categories of models.

  6. NASA's Big Data Task Force

    NASA Astrophysics Data System (ADS)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  7. Big Data Technologies

    PubMed Central

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patients' care processes and of single patients' behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  8. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    PubMed

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than life time prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
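    The reported sensitivity and specificity follow from standard confusion-matrix arithmetic. A minimal sketch, using hypothetical counts chosen only to illustrate the calculation (the BIG-S study's underlying cell counts are not given in the abstract):

```python
# Illustrative computation of sensitivity and specificity for a screening
# tool checked against clinical diagnosis. The confusion-matrix counts
# below are hypothetical, not taken from the BIG-S study.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of clinically diagnosed cases that the screen detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of non-cases that the screen correctly rules out."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for a 300-patient case group and 132-person comparison group:
tp, fn = 299, 1   # screen-positive / screen-negative among diagnosed patients
tn, fp = 127, 5   # screen-negative / screen-positive in the comparison group

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # → sensitivity = 99.7%
print(f"specificity = {specificity(tn, fp):.1%}")  # → specificity = 96.2%
```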

  9. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin

    The big data environment creates the data conditions needed to improve the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  10. Quantum nature of the big bang.

    PubMed

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  11. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    ERIC Educational Resources Information Center

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  12. Big data processing in the cloud - Challenges and platforms

    NASA Astrophysics Data System (ADS)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge in both the problem domain and in the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide for dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing as the most important and difficult to manage is outlined. The paper highlights main advantages of cloud and potential problems.
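    The Lambda architecture mentioned above combines a high-latency batch layer (full recomputation over the master dataset) with a speed layer (incremental processing of recent events), merged at query time by a serving layer. A minimal single-process sketch of that pattern, with all names and data purely illustrative:

```python
# Minimal sketch of the Lambda architecture pattern: a batch layer that
# recomputes an aggregate over the full master dataset, a speed layer that
# incrementally counts recent events not yet in the batch view, and a
# serving-layer query that merges the two views at read time.
# All names and data are illustrative, not from any particular system.

from collections import Counter

master_dataset = ["page_a", "page_b", "page_a"]   # immutable historical events
recent_events  = ["page_b", "page_c"]             # events since the last batch run

def batch_view(events):
    """Batch layer: full recomputation (high latency, exact)."""
    return Counter(events)

def speed_view(events):
    """Speed layer: incremental view over data the batch layer hasn't seen."""
    return Counter(events)

def query(page, batch, speed):
    """Serving layer: merge batch and real-time views at read time."""
    return batch[page] + speed[page]

batch = batch_view(master_dataset)
speed = speed_view(recent_events)
print(query("page_a", batch, speed))  # → 2 (all from the batch view)
print(query("page_b", batch, speed))  # → 2 (one batch + one recent)
```

    The Kappa architecture, by contrast, drops the batch layer entirely and reprocesses history through the same streaming path when views must be rebuilt.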

  13. Ethics and Epistemology in Big Data Research.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  14. Big Questions: Missing Antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    2013-08-27

    Einstein's equation E = mc2 is often said to mean that energy can be converted into matter. More accurately, energy can be converted to matter and antimatter. During the first moments of the Big Bang, the universe was smaller, hotter and energy was everywhere. As the universe expanded and cooled, the energy converted into matter and antimatter. According to our best understanding, these two substances should have been created in equal quantities. However, when we look out into the cosmos we see only matter and no antimatter. The absence of antimatter is one of the Big Mysteries of modern physics. In this video, Fermilab's Dr. Don Lincoln explains the problem, although he doesn't answer it. The answer, as in all Big Mysteries, is still unknown and one of the leading research topics of contemporary science.

  15. A Great Year for the Big Blue Water

    NASA Astrophysics Data System (ADS)

    Leinen, M.

    2016-12-01

    It has been a great year for the big blue water. Last year the 'United_Nations' decided that it would focus on long time remain alright for the big blue water as one of its 'Millenium_Development_Goals'. This is new. In the past the big blue water was never even considered as a part of this world long time remain alright push. Also, last year the big blue water was added to the words of the group of world people paper #21 on cooling the air and things. It is hard to believe that the big blue water was not in the paper before because 70% of the world is covered by the big blue water! Many people at the group of world meeting were from our friends at 'AGU'.

  16. Real-Time Information Extraction from Big Data

    DTIC Science & Technology

    2015-10-01

    Institute for Defense Analyses. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big

  17. Limnology of Big Lake, south-central Alaska, 1983-84

    USGS Publications Warehouse

    Woods, Paul F.

    1992-01-01

    The limnological characteristics and trophic state of Big Lake in south-central Alaska were determined from the results of an intensive study during 1983-84. The study was begun in response to concern over the potential for eutrophication of Big Lake, which has experienced substantial residential development and recreational use because of its proximity to Anchorage. The east and west basins of the 1,213 square-hectometer lake were each visited 36 times during the 2-year study to obtain a wide variety of physical, chemical, and biological data. During 1984, an estimate was made of the lake's annual primary production. Big Lake was classified as oligotrophic on the basis of its annual mean values for total phosphorus (9.5 micrograms per liter), total nitrogen (209 micrograms per liter), chlorophyll-a (2.5 micrograms per liter), secchi-disc transparency (6.3 meters), and its mean daily integral primary production of 81.1 milligrams of carbon fixed per square meter. The lake was, however, uncharacteristic of oligotrophic lakes in that a severe dissolved-oxygen deficit developed within the hypolimnion during summer stratification and under winter ice cover. The summer dissolved-oxygen deficit resulted from the combination of strong and persistent thermal stratification, which developed within 1 week of the melting of the lake's ice cover in May, and the failure of the spring circulation to fully reaerate the hypolimnion. The autumn circulation did reaerate the entire water column, but the ensuing 6 months of ice and snow cover prevented atmospheric reaeration of the water column and led to development of the winter dissolved-oxygen deficit. The anoxic conditions that eventually developed near the lake bottom allowed the release of nutrients from the bottom sediments and facilitated ammonification reactions. These processes yielded hypolimnetic concentrations of nitrogen and phosphorus compounds, which were much larger than the oligotrophic concentrations measured

  18. Detection and Characterisation of Meteors as a Big Data Citizen Science project

    NASA Astrophysics Data System (ADS)

    Gritsevich, M.

    2017-12-01

    Of the roughly 50,000 meteorites currently known to science, atmospheric passage was recorded instrumentally in only 30 cases with the potential to derive their atmospheric trajectories and pre-impact heliocentric orbits. Similarly, while observations of meteors add thousands of new entries per month to existing databases, it is extremely rare that they lead to meteorite recovery. Meteor studies thus represent an excellent example of a Big Data citizen science project, where progress in the field largely depends on the prompt identification and characterisation of meteor events as well as on extensive and valuable contributions by amateur observers. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to boost its experimental and theoretical horizons and seek more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increased interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently

  19. Big data and biomedical informatics: a challenging opportunity.

    PubMed

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  20. Think Big, Bigger ... and Smaller

    ERIC Educational Resources Information Center

    Nisbett, Richard E.

    2010-01-01

    One important principle of social psychology, writes Nisbett, is that some big-seeming interventions have little or no effect. This article discusses a number of cases from the field of education that confirm this principle. For example, Head Start seems like a big intervention, but research has indicated that its effects on academic achievement…

  1. Personality and job performance: the Big Five revisited.

    PubMed

    Hurtz, G M; Donovan, J J

    2000-12-01

    Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.

  2. Adding Big Data Analytics to GCSS-MC

    DTIC Science & Technology

    2014-09-30

    Keywords: Big Data, Hadoop, MapReduce, GCSS-MC. 93 pages. Table of contents includes: 2.5 Hadoop; 3 The Experiment Design; 3.1 Why Add a Big Data Element; 3.2 Adding a Big Data Element to GCSS-MC; 3.3 Building a Hadoop Cluster.

  3. Ethics and Epistemology of Big Data.

    PubMed

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian

    2017-12-01

    In this Symposium on the Ethics and Epistemology of Big Data, we present four perspectives on the ways in which the rapid growth in size of research databanks (i.e., their shift into the realm of "big data") has changed their moral, socio-political, and epistemic status. While there is clearly something different about "big data" databanks, we encourage readers to place the arguments presented in this Symposium in the context of longstanding debates about the ethics, politics, and epistemology of biobank, database, genetic, and epidemiological research.

  4. The challenges of big data.

    PubMed

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics has created a feeding frenzy, fueled by the production of many next-generation-sequencing-based data sets that seek to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  5. Big³. Editorial.

    PubMed

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths and weaknesses as well as its risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions to the topic. For the first time in its history, the IMIA Yearbook will be published in an open access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the IMIA Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  6. The Big Read: Case Studies

    ERIC Educational Resources Information Center

    National Endowment for the Arts, 2009

    2009-01-01

    The Big Read evaluation included a series of 35 case studies designed to gather more in-depth information on the program's implementation and impact. The case studies gave readers a valuable first-hand look at The Big Read in context. Both formal and informal interviews, focus groups, attendance at a wide range of events--all showed how…

  7. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    USGS Publications Warehouse

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  8. Application and Prospect of Big Data in Water Resources

    NASA Astrophysics Data System (ADS)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of advances in information technology and affordable data storage, we have entered an era of data explosion. The term "Big Data" and the technologies related to it have been coined and are commonly applied in many fields. However, academic study of Big Data applications in water resources has attracted attention only recently, so water-resource Big Data technology is not yet fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying Big Data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension, and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data, and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawling, is proposed. Finally, the prospect of applying Big Data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data-Driven Decision) approaches will be used more in water resources management in the future.
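    The MapReduce model mentioned in this abstract splits a computation into a map phase that emits key-value pairs, a shuffle that groups pairs by key, and a reduce phase that aggregates each group. The sketch below is a single-process illustration of that pattern rather than a Hadoop deployment; the station names and flow values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-station streamflow records: (station, flow) pairs.
records = [
    ("station_A", 12.0), ("station_B", 8.5),
    ("station_A", 14.0), ("station_B", 7.5),
]

def map_phase(recs):
    # Emit (key, value) pairs; here each record is already keyed by station.
    for station, flow in recs:
        yield station, flow

def shuffle(pairs):
    # Group all values emitted for the same key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Aggregate each key's values; here, the mean flow per station.
    return {k: sum(v) / len(v) for k, v in grouped.items()}

means = reduce_phase(shuffle(map_phase(records)))
print(means)  # {'station_A': 13.0, 'station_B': 8.0}
```

On a real Hadoop cluster the shuffle is performed by the framework between distributed mappers and reducers, but the three-stage structure is the same.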

  9. Toward a Literature-Driven Definition of Big Data in Healthcare.

    PubMed

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
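    The volume criterion proposed in this abstract, that a dataset qualifies as big data when Log(n∗p) ≥ 7, is simple enough to check directly. A minimal sketch, assuming a base-10 logarithm and using example sizes invented for illustration:

```python
import math

def is_big_data(n, p, threshold=7):
    # n: number of statistical individuals, p: number of variables.
    # Criterion from the abstract above: log10(n * p) >= 7.
    return math.log10(n * p) >= threshold

print(is_big_data(n=100_000, p=500))  # log10(5e7) ~ 7.7 -> True
print(is_big_data(n=2_000, p=50))     # log10(1e5) = 5   -> False
```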

  10. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    PubMed

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in health care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality of care and performance measurement, public health and research applications, among others. The author delineates the tremendous potential for big data analytics and discuss how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  11. Big Data and Biomedical Informatics: A Challenging Opportunity

    PubMed Central

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler for carrying out unprecedented research studies and implementing new models of healthcare delivery. It is therefore first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications for the reproducibility of research studies and the management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research but may also be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, avoid preconceptions or over-enthusiasm, fully exploit the available technologies, and improve data processing and data management regulations. PMID:24853034

  12. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

    There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers, and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce the needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache Big Data Stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL - the Scalable Parallel Interoperable Data Analytics Library - built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas, including Polar Science.

  13. Issues in Big-Data Database Systems

    DTIC Science & Technology

    2014-06-01

    Post, 18 August 2013. Berman, Jules K. (2013). Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. New York: Elsevier. 261pp.

  14. WE-H-BRB-00: Big Data in Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research. To optimize our current data collection by adopting new strategies from outside radiation oncology. To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  15. Toward a Literature-Driven Definition of Big Data in Healthcare

    PubMed Central

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log⁡(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  16. Big-Eyed Bugs Have Big Appetite for Pests

    USDA-ARS?s Scientific Manuscript database

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  17. Big Data - What is it and why it matters.

    PubMed

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time. Yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets, and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  18. [Yield of starch extraction from plantain (Musa paradisiaca). Pilot plant study].

    PubMed

    Flores-Gorosquera, Emigdia; García-Suárez, Francisco J; Flores-Huicochea, Emmanuel; Núñez-Santiago, María C; González-Soto, Rosalia A; Bello-Pérez, Luis A

    2004-01-01

    In México, the banana (Musa paradisiaca) is cooked (by boiling or deep frying) before being eaten, but consumption is not very popular and a large quantity of the product is lost after harvesting. The unripe plantain has a high starch content, so its use can be diversified as a raw material for starch isolation. The objective of this work was to study starch yield at pilot plant scale. Experiments at laboratory scale were carried out using the pulp with 0.3% citric acid (an antioxidant), in order to evaluate the different unit operations of the process. The starch yield, based on the starch present in the pulp that can be isolated, was between 76 and 86% at laboratory scale, and between 63 and 71% at pilot plant scale, across different lots of banana fruit. Starch yield values were similar among the various lots, showing that the process is reproducible. The lower starch recovery at pilot plant scale is due to losses during the sieving operations; even so, the amount of starch recovered is good.

  19. Research on information security in big data era

    NASA Astrophysics Data System (ADS)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the challenges to and causes of data security brought by big data, discusses the development trend of network attacks against the background of big data, and puts forward the authors' own opinions on the development of security defenses in terms of technology, strategy, and products.

  20. ["Big data" - large data, a lot of knowledge?].

    PubMed

    Hothorn, Torsten

    2015-01-28

    For the past several years, the term Big Data has described technologies to extract knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.

  1. Big Ideas in Primary Mathematics: Issues and Directions

    ERIC Educational Resources Information Center

    Askew, Mike

    2013-01-01

    This article is located within the literature arguing for attention to Big Ideas in teaching and learning mathematics for understanding. The focus is on surveying the literature of Big Ideas and clarifying what might constitute Big Ideas in the primary Mathematics Curriculum based on both theoretical and pragmatic considerations. This is…

  2. Big Data - Smart Health Strategies

    PubMed Central

    2014-01-01

    Summary Objectives To select the best papers published in 2013 in the field of big data and smart health strategies, and to summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics; and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of the current scientific literature illustrated a variety of interesting methods and applications in the field, but the promises still exceed the current outcomes. As we get closer to a solid foundation with respect to a common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate reaching the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  3. Big Data Management in US Hospitals: Benefits and Barriers.

    PubMed

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that hospitals expect a number of benefits from big data analytics, including cost savings and business intelligence. In using big data, many hospitals have recognized challenges, including a lack of experience and the cost of developing the analytics. Many hospitals will need to invest in acquiring personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  4. Big Data in Caenorhabditis elegans: quo vadis?

    PubMed Central

    Hutter, Harald; Moerman, Donald

    2015-01-01

    A clear definition of what constitutes “Big Data” is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of “complete” data sets for this organism is actually rather small—not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein–protein interaction—important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. PMID:26543198

  5. [Relevance of big data for molecular diagnostics].

    PubMed

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This especially includes omics technologies, such as genomics, transcriptomics, and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require the adaptation of existing software tools or the development of new ones. For these steps, structuring and evaluation according to the biological context are extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Ever more extensive recording of molecular processes, also in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, the translation of information derived from molecular big data will also require new specifications for education and professional competence.

  6. 'Big data' in pharmaceutical science: challenges and opportunities.

    PubMed

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  7. Sports and the Big6: The Information Advantage.

    ERIC Educational Resources Information Center

    Eisenberg, Mike

    1997-01-01

    Explores the connection between sports and the Big6 information problem-solving process and how sports provides an ideal setting for learning and teaching about the Big6. Topics include information aspects of baseball, football, soccer, basketball, figure skating, track and field, and golf; and the Big6 process applied to sports. (LRW)

  8. Current applications of big data in obstetric anesthesiology.

    PubMed

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    This narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as to outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  9. [Big data and their perspectives in radiation therapy].

    PubMed

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  10. Volume and Value of Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  11. Volume and Value of Big Healthcare Data

    PubMed Central

    Dinov, Ivo D.

    2016-01-01

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309

  12. Simulation Experiments: Better Data, Not Just Big Data

    DTIC Science & Technology

    2014-12-01

    Modeling and Computer Simulation 22 (4): 20:1–20:17. Hogan, Joe. 2014, June 9. "So Far, Big Data is Small Potatoes". Scientific American Blog Network. Available via http://blogs.scientificamerican.com/cross-check/2014/06/09/so-far-big-data-is-small-potatoes/. IBM. 2014. "Big Data at the Speed of Business"

  13. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  14. Big data: survey, technologies, opportunities, and challenges.

    PubMed

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds manageable bounds. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At that point, predicted data production will be 44 times greater than in 2009. As information is transferred and shared at light speed over optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminology of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  15. Big Data: Survey, Technologies, Opportunities, and Challenges

    PubMed Central

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds manageable bounds. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At that point, predicted data production will be 44 times greater than in 2009. As information is transferred and shared at light speed over optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminology of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  16. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    NASA Astrophysics Data System (ADS)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data analytics is a major topic nowadays. As data generation becomes more demanding and scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward "big data-as-a-service" is now discussed everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud-based platforms. This article is specifically focused on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, together with the possibility of performing big data analytics on a cloud platform, is in demand for a new era of growth. This article also surveys the available technologies and techniques for migrating big data to the cloud.

  17. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    PubMed

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  18. Big data analytics in healthcare: promise and potential.

    PubMed

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

  19. Big data are coming to psychiatry: a general introduction.

    PubMed

    Monteith, Scott; Glenn, Tasha; Geddes, John; Bauer, Michael

    2015-12-01

    Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.

  20. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
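    The classical starting point for the randomness-extraction literature discussed above can be sketched with the von Neumann extractor, which debiases an independent-but-biased bit stream. This is only an illustrative toy, NOT the paper's method, which targets large, possibly correlated "big sources" with provable guarantees.

```python
# Minimal sketch of classical randomness extraction (von Neumann, 1951).
# Assumes the input bits are independent and identically biased; for an
# i.i.d. source the pairs 01 and 10 are equally likely, so keeping the
# first bit of each unequal pair yields unbiased output bits.

def von_neumann_extract(bits):
    """Extract unbiased bits from a biased i.i.d. bit stream."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:          # drop 00 and 11 pairs entirely
            out.append(a)   # 01 -> 0, 10 -> 1, each with probability 1/2
    return out

# Example: a heavily biased input still produces unbiased output bits,
# at the cost of discarding most of the stream.
sample = [1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
print(von_neumann_extract(sample))  # → [0, 1, 1]
```

Note the contrast with the abstract's setting: von Neumann extraction fails on correlated samples, which is exactly the gap that extractors for big sources are designed to close.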

  1. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  2. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  3. Big Data, Big Problems: Incorporating Mission, Values, and Culture in Provider Affiliations.

    PubMed

    Shaha, Steven H; Sayeed, Zain; Anoushiravani, Afshin A; El-Othmani, Mouhanad M; Saleh, Khaled J

    2016-10-01

    This article explores how integration of data from clinical registries and electronic health records produces a quality impact within orthopedic practices. Data are differentiated from information, and several types of data that are collected and used in orthopedic outcome measurement are defined. Furthermore, the concept of comparative effectiveness and its impact on orthopedic clinical research are assessed. This article places emphasis on how the concept of big data produces health care challenges balanced with benefits that may be faced by patients and orthopedic surgeons. Finally, essential characteristics of an electronic health record that interlinks musculoskeletal care and big data initiatives are reviewed. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. AirMSPI PODEX BigSur Terrain Images

    Atmospheric Science Data Center

    2013-12-13

    ... Browse images from the PODEX 2013 campaign: Big Sur target (Big Sur, California), 02/03/2013, terrain-projected. For more information, see the Data Product Specifications (DPS). ...

  5. A New Look at Big History

    ERIC Educational Resources Information Center

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  6. West Virginia's big trees: setting the record straight

    Treesearch

    Melissa Thomas-Van Gundy; Robert Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor, historian Rob Whetsell, approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  7. 77 FR 49779 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written comments about...

  8. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... held at the Big Horn County Weed and Pest Building, 4782 Highway 310, Greybull, Wyoming. Written...

  9. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    ScienceCinema

    None

    2017-12-09

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  10. Effect of Symmetry on Performance of Imploding Capsules using the Big Foot Design

    NASA Astrophysics Data System (ADS)

    Khan, Shahab; Casey, Daniel; Baker, Kevin; Thomas, Cliff; Nora, Ryan; Spears, Brian; Benedetti, Laura; Izumi, Nobuhiko; Ma, Tammy; Nagel, Sabrina; Pak, Arthur; National Ignition Facility Collaboration

    2017-10-01

    At the National Ignition Facility, several simultaneous designs are investigated for optimizing Inertial Confinement Fusion (ICF) energy gain of indirectly driven imploding fuel capsules. Relatively high neutron yield has been achieved while exhibiting a non-symmetric central core and/or shell. While developing the "Big Foot" design, several tuning steps were undertaken to minimize the asymmetry of both the central hot core as well as the shell. Surrogate capsules (symcaps) were utilized in the 2-D Radiography platform to assess both the shell and central core symmetry. The results of the tuning experiments are presented. In addition, a comparison of performance and shape metrics demonstrates that improving symmetry of the implosion can yield better performance. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-683471.

  11. Structuring the Curriculum around Big Ideas

    ERIC Educational Resources Information Center

    Alleman, Janet; Knighton, Barbara; Brophy, Jere

    2010-01-01

    This article provides an inside look at Barbara Knighton's classroom teaching. She uses big ideas to guide her planning and instruction and gives other teachers suggestions for adopting the big idea approach and ways for making the approach easier. This article also represents a "small slice" of a dozen years of collaborative research,…

  12. Toward a manifesto for the 'public understanding of big data'.

    PubMed

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  13. Phase feeding in a big-bird production scenario: effect on growth performance, yield, and fillet dimension.

    PubMed

    Brewer, V B; Owens, C M; Emmert, J L

    2012-05-01

    Phase feeding (PF) has been effective at maintaining broiler growth while reducing production cost, but the effect on different broiler strains and sexes has not been assessed. An experiment was conducted using 4 commercial broiler strains grown up to 63 d of age (n = 1,440), comparing a PF approach to an industry-type diet. At d 17, birds began either the industry or PF regimen. The industry regimen consisted of average industry nutrient levels with periods from 17 to 32 d, 32 to 40 d, 40 to 49 d, and 49 d to the end of trial. For PF, diets were prepared that contained Lys, sulfur amino acids, and Thr levels matching the predicted requirements for birds at the beginning (high nutrient density) and end (low nutrient density) of PF. Pelleted high and low nutrient density diets were blended to produce rations containing amino acid levels that matched the predicted PF requirements over 2-d intervals. Weight gain, feed intake, and feed efficiency were calculated through d 58. Birds were commercially processed at 59, 61, or 63 d; yield and fillet dimensions were measured. Phase feeding did not affect weight gain or feed intake of broilers during the overall growth period (17-58 d). For most strains, PF did not affect final BW, yield, or fillet dimensions. However, strain and sex had greater effects on growth performance, yields, and fillet dimensions. Strains B and D had greater breast yield than strains A and C. Reduced feed costs ($0.01 to $0.04 per kilogram of gain, depending on strain) were observed for all strains with PF for the overall growth period (17-58 d). Therefore, potential savings on feed costs are possible for all strains used in this study with the incorporation of the PF regimen.
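    The diet-blending step described above reduces to simple linear interpolation: mix the high- and low-density pelleted diets so the blend hits the predicted amino acid requirement for a given 2-d interval. The sketch below illustrates that arithmetic; the nutrient percentages are invented placeholders, not the study's actual formulations.

```python
# Hedged sketch of linear diet blending (Pearson-square-style arithmetic).
# Given nutrient levels of a high-density and a low-density diet, compute
# the fraction f of the high-density diet so that
#     high * f + low * (1 - f) == target

def blend_fraction(high, low, target):
    """Fraction of the high-density diet needed to hit `target`."""
    if not (min(low, high) <= target <= max(low, high)):
        raise ValueError("target outside the range of the two diets")
    return (target - low) / (high - low)

# Illustrative only: 1.10% digestible Lys (high diet), 0.85% (low diet),
# 1.00% predicted requirement for the current 2-d interval.
f = blend_fraction(1.10, 0.85, 1.00)
print(round(f, 2))  # fraction of high-density diet in the mix
```

As the birds age and the predicted requirement declines, the same formula simply shifts the mix toward the low-density diet.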

  14. Big Data and SME financing in China

    NASA Astrophysics Data System (ADS)

    Tian, Z.; Hassan, A. F. S.; Razak, N. H. A.

    2018-05-01

    Big Data is becoming more and more prevalent in recent years, and it attracts lots of attention from various perspectives of the world such as academia, industry, and even government. Big Data can be seen as the next-generation source of power for the economy. Today, Big Data represents a new way to approach information and help all industry and business fields. The Chinese financial market has long been dominated by state-owned banks; however, these banks provide low-efficiency help toward small- and medium-sized enterprises (SMEs) and private businesses. The development of Big Data is changing the financial market, with more and more financial products and services provided by Internet companies in China. The credit rating models and borrower identification make online financial services more efficient than conventional banks. These services also challenge the domination of state-owned banks.

  15. An embedding for the big bang

    NASA Technical Reports Server (NTRS)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  16. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  17. Big Data and the Future of Radiology Informatics.

    PubMed

    Kansagra, Akash P; Yu, John-Paul J; Chatterjee, Arindam R; Lenchik, Leon; Chow, Daniel S; Prater, Adam B; Yeh, Jean; Doshi, Ankur M; Hawkins, C Matthew; Heilbrun, Marta E; Smith, Stacy E; Oselkin, Martin; Gupta, Pushpender; Ali, Sayed

    2016-01-01

    Rapid growth in the amount of data that is electronically recorded as part of routine clinical operations has generated great interest in the use of Big Data methodologies to address clinical and research questions. These methods can efficiently analyze and deliver insights from high-volume, high-variety, and high-growth rate datasets generated across the continuum of care, thereby forgoing the time, cost, and effort of more focused and controlled hypothesis-driven research. By virtue of an existing robust information technology infrastructure and years of archived digital data, radiology departments are particularly well positioned to take advantage of emerging Big Data techniques. In this review, we describe four areas in which Big Data is poised to have an immediate impact on radiology practice, research, and operations. In addition, we provide an overview of the Big Data adoption cycle and describe how academic radiology departments can promote Big Data development. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  18. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  19. Big Data Provenance: Challenges, State of the Art and Opportunities.

    PubMed

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
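    The core idea above, recording provenance as workflow steps execute, can be sketched with a minimal step wrapper that logs input/output fingerprints and timing per step. This is an illustrative toy under assumed names (`tracked_step`, `provenance_log`), not the workflow system the authors use.

```python
# Minimal sketch of per-step provenance capture in a workflow.
# Each step's inputs and output are fingerprinted (short SHA-256 digests)
# so lineage can later be reconstructed and results reproduced.
import hashlib
import json
import time

provenance_log = []

def _fingerprint(obj):
    """Short, stable digest of a JSON-serializable value."""
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()[:8]

def tracked_step(name, func, *inputs):
    """Run one workflow step and append a provenance record for it."""
    start = time.time()
    output = func(*inputs)
    provenance_log.append({
        "step": name,
        "inputs": [_fingerprint(i) for i in inputs],
        "output": _fingerprint(output),
        "seconds": round(time.time() - start, 3),
    })
    return output

# A two-step toy pipeline: clean the data, then aggregate it.
cleaned = tracked_step("clean", lambda xs: [x for x in xs if x is not None], [1, None, 3])
total = tracked_step("sum", sum, cleaned)
print([entry["step"] for entry in provenance_log])  # → ['clean', 'sum']
```

Matching the output fingerprint of "clean" against the input fingerprint of "sum" is what lets a provenance query chain the two records into a lineage graph.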

  20. Big Data Provenance: Challenges, State of the Art and Opportunities

    PubMed Central

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2017-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data. PMID:29399671

  1. 1976 Big Thompson flood, Colorado

    USGS Publications Warehouse

    Jarrett, R. D.; Vandas, S.J.

    2006-01-01

    In the early evening of July 31, 1976, a large stationary thunderstorm released as much as 7.5 inches of rainfall in about an hour (about 12 inches in a few hours) in the upper reaches of the Big Thompson River drainage. This large amount of rainfall in such a short period of time produced a flash flood that caught residents and tourists by surprise. The immense volume of water that churned down the narrow Big Thompson Canyon scoured the river channel and destroyed everything in its path, including 418 homes, 52 businesses, numerous bridges, paved and unpaved roads, power and telephone lines, and many other structures. The tragedy claimed the lives of 144 people. Scores of other people narrowly escaped with their lives. The Big Thompson flood ranks among the deadliest of Colorado's recorded floods. It is one of several destructive floods in the United States that has shown the necessity of conducting research to determine the causes and effects of floods. The U.S. Geological Survey (USGS) conducts research and operates a Nationwide streamgage network to help understand and predict the magnitude and likelihood of large streamflow events such as the Big Thompson Flood. Such research and streamgage information are part of an ongoing USGS effort to reduce flood hazards and to increase public awareness.

  2. Infrared Observations with the 1.6 Meter New Solar Telescope in Big Bear: Origins of Space Weather

    DTIC Science & Technology

    2015-05-21

    with the NST came in the Summer of 2009, while the first observations corrected by adaptive optics (AO) came in the Summer of 2010, and the first vector...magnetograms (VMGs) in the Summer of 2011. In 2012, a new generation of solar adaptive optics (AO) developed in Big Bear led to hitherto only...upon which the NST has yielded key information. Our concentration on sunspots in the second year of funding arises because of the improved resolution

  3. [Embracing medical innovation in the era of big data].

    PubMed

    You, Suning

    2015-01-01

    Along with the worldwide advent of the big data era, the medical field inevitably has to place itself within it. This article thoroughly introduces the basic knowledge of big data and points out the coexistence of its advantages and disadvantages. Although innovation in the medical field is a struggle, the current medical pattern will be fundamentally changed by big data. The article also shows the rapid change of relevant analysis in the big data era, depicts a promising outlook for digital medicine, and proposes some advice to surgeons.

  4. Application and Exploration of Big Data Mining in Clinical Medicine.

    PubMed

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
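    Among the techniques the review names, the Bayesian family is easy to illustrate concretely. The sketch below implements naive Bayes, the simplest special case of Bayesian-network reasoning, on a made-up disease risk-assessment task. All features, counts, and labels are fabricated for illustration; this is not a method or dataset from the review.

```python
# Toy naive Bayes classifier for a hypothetical risk-assessment task.
# Features are discrete (0/1); Laplace smoothing avoids zero probabilities.
from collections import defaultdict

def train_naive_bayes(rows, labels):
    """rows: list of feature tuples; labels: parallel list of classes."""
    prior = defaultdict(int)               # class -> count
    cond = defaultdict(int)                # (class, feature index, value) -> count
    for feats, y in zip(rows, labels):
        prior[y] += 1
        for i, v in enumerate(feats):
            cond[(y, i, v)] += 1
    return prior, cond, len(rows)

def predict(model, feats):
    """Return the class maximizing P(class) * prod_i P(feature_i | class)."""
    prior, cond, n = model
    best, best_p = None, -1.0
    for y, count in prior.items():
        p = count / n
        for i, v in enumerate(feats):
            p *= (cond[(y, i, v)] + 1) / (count + 2)   # Laplace smoothing
        if p > best_p:
            best, best_p = y, p
    return best

# Fabricated training data: (smoker?, hypertension?) -> risk class.
X = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 0)]
y = ["high", "high", "low", "low", "high", "low"]
model = train_naive_bayes(X, y)
print(predict(model, (1, 1)))  # → high
```

Real clinical decision support would of course use validated features, far more data, and calibrated probabilities; the point here is only the shape of the technique.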

  5. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    PubMed

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  6. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  7. A proposed framework of big data readiness in public sectors

    NASA Astrophysics Data System (ADS)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery of the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of its main national agenda items. Notwithstanding the government's commitment to promote big data amongst government agencies, the degree of readiness of those agencies and their employees is crucial to ensuring successful deployment of big data. This paper therefore proposes a conceptual framework to investigate perceived readiness for big data amongst Malaysian government agencies. Perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness among public agencies regarding big data and the outcomes expected from greater or lower change readiness among the public sectors.

  8. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe as it is now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  9. 78 FR 33326 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-04

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held July 15, 2013 at 3:00 p.m. ADDRESSES: The meeting will be held at Big Horn County Weed and...

  10. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee... will be held on March 3, 2011, and will begin at 10 a.m. ADDRESSES: The meeting will be held at the Big...

  11. In Search of the Big Bubble

    ERIC Educational Resources Information Center

    Simoson, Andrew; Wentzky, Bethany

    2011-01-01

    Freely rising air bubbles in water sometimes assume the shape of a spherical cap, a shape also known as the "big bubble". Is it possible to find some objective function involving a combination of a bubble's attributes for which the big bubble is the optimal shape? Following the basic idea of the definite integral, we define a bubble's surface as…

  12. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique in healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations, and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of…

  13. Big Data Analytics in Healthcare

    PubMed Central

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Beard, Daniel A.

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957

  14. Big Data Analytics in Healthcare.

    PubMed

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  15. Mountain big sagebrush (Artemisia tridentata spp vaseyana) seed production

    Treesearch

    Melissa L. Landeen

    2015-01-01

    Big sagebrush (Artemisia tridentata Nutt.) is the most widespread and common shrub in the sagebrush biome of western North America. Of the three most common subspecies of big sagebrush (Artemisia tridentata), mountain big sagebrush (ssp. vaseyana; MBS) is the most resilient to disturbance, but still requires favorable climatic conditions and a viable post-...

  16. New Evidence on the Development of the Word "Big."

    ERIC Educational Resources Information Center

    Sena, Rhonda; Smith, Linda B.

    1990-01-01

    Results indicate that curvilinear trend in children's understanding of word "big" is not obtained in all stimulus contexts. This suggests that meaning and use of "big" is complex, and may not refer simply to larger objects in a set. Proposes that meaning of "big" constitutes a dynamic system driven by many perceptual,…

  17. Investigating Seed Longevity of Big Sagebrush (Artemisia tridentata)

    USGS Publications Warehouse

    Wijayratne, Upekala C.; Pyke, David A.

    2009-01-01

    The Intermountain West is dominated by big sagebrush communities (Artemisia tridentata subspecies) that provide habitat and forage for wildlife, prevent erosion, and are economically important to recreation and livestock industries. The two most prominent subspecies of big sagebrush in this region are Wyoming big sagebrush (A. t. ssp. wyomingensis) and mountain big sagebrush (A. t. ssp. vaseyana). Increased understanding of seed bank dynamics will assist with sustainable management and persistence of sagebrush communities. For example, mountain big sagebrush may be subjected to shorter fire return intervals and prescribed fire is a tool used often to rejuvenate stands and reduce tree (Juniperus sp. or Pinus sp.) encroachment into these communities. A persistent seed bank for mountain big sagebrush would be advantageous under these circumstances. Laboratory germination trials indicate that seed dormancy in big sagebrush may be habitat-specific, with collections from colder sites being more dormant. Our objective was to investigate seed longevity of both subspecies by evaluating viability of seeds in the field with a seed retrieval experiment and sampling for seeds in situ. We chose six study sites for each subspecies. These sites were dispersed across eastern Oregon, southern Idaho, northwestern Utah, and eastern Nevada. Ninety-six polyester mesh bags, each containing 100 seeds of a subspecies, were placed at each site during November 2006. Seed bags were placed in three locations: (1) at the soil surface above litter, (2) on the soil surface beneath litter, and (3) 3 cm below the soil surface to determine whether dormancy is affected by continued darkness or environmental conditions. Subsets of seeds were examined in April and November in both 2007 and 2008 to determine seed viability dynamics. Seed bank samples were taken at each site, separated into litter and soil fractions, and assessed for number of germinable seeds in a greenhouse. 
Community composition data…

  18. Smart Information Management in Health Big Data.

    PubMed

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymous patient records in a big data store and their extraction in order to provide needful real-time intelligence. The purpose of the present study is to highlight the design and implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in a flat file simulating a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. The SIMS in the health big data context aims at the identification of new therapies and approaches to delivering care.
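
    The flat-file-plus-cache idea described above can be sketched in a few lines. This is a hypothetical illustration (the file layout, record fields and function names are invented for the example, not taken from the SIMS paper): records live one per line in a flat file, lookups scan the file like a lookup table, and a memoizing cache answers repeated queries from memory.

```python
import os
import tempfile
from functools import lru_cache

# Hypothetical record layout: "id|payload", one anonymous patient record per line.
def write_records(path, records):
    with open(path, "w") as fh:
        for rec_id, payload in records.items():
            fh.write(f"{rec_id}|{payload}\n")

def make_lookup(path):
    @lru_cache(maxsize=1024)          # cache mechanism: repeated lookups are served from memory
    def lookup(rec_id):
        with open(path) as fh:        # lookup table: linear scan of the flat file
            for line in fh:
                key, _, payload = line.rstrip("\n").partition("|")
                if key == rec_id:
                    return payload
        return None                   # unknown id
    return lookup

path = os.path.join(tempfile.gettempdir(), "sims_demo.txt")
write_records(path, {"p001": "age=54;dx=I25.1", "p002": "age=61;dx=E11.9"})
lookup = make_lookup(path)
print(lookup("p002"))   # age=61;dx=E11.9
```

    A real deployment would of course replace the linear scan with an indexed or NoSQL store; the sketch only shows the division of labour between the flat file and the cache.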

  19. Integrative methods for analyzing big data in precision medicine.

    PubMed

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarkers discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Big Dreams

    ERIC Educational Resources Information Center

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  1. Translating Big Data into Smart Data for Veterinary Epidemiology.

    PubMed

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to creating "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  2. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Quality of Big Data in Healthcare

    DOE PAGES

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    2015-01-01

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  4. Quality of Big Data in Healthcare

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Ramachandran, Natarajan; Ferrell, Regina Kay

    The current trend in Big Data Analytics and in particular Health information technology is towards building sophisticated models, methods and tools for business, operational and clinical intelligence, but the critical issue of data quality required for these models is not getting the attention it deserves. The objective of the paper is to highlight the issues of data quality in the context of Big Data Healthcare Analytics.

  5. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  7. The BIG Data Center: from deposition to integration to translation

    PubMed Central

    2017-01-01

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. PMID:27899658

  8. The BIG Data Center: from deposition to integration to translation.

    PubMed

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Application and Exploration of Big Data Mining in Clinical Medicine

    PubMed Central

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378

  10. Rethinking big data: A review on the data quality and usage issues

    NASA Astrophysics Data System (ADS)

    Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng

    2016-05-01

    The recent explosive growth of publications on big data studies has well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings lots of "big errors" in data quality and data usage, and it cannot be used as a substitute for sound research design and solid theories. We indicate and summarize the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection, information incompleteness and noise of big data, unrepresentativeness, consistency and reliability, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify those "big errors" brought by big data.

  11. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, exerting significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes hard to determine.

  12. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, exerting significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes hard to determine. PMID:25143968

  13. Big-bang nucleosynthesis revisited

    NASA Technical Reports Server (NTRS)

    Olive, Keith A.; Schramm, David N.; Steigman, Gary; Walker, Terry P.

    1989-01-01

    The homogeneous big-bang nucleosynthesis yields of D, He-3, He-4, and Li-7 are computed taking into account recent measurements of the neutron mean-life as well as updates of several nuclear reaction rates which primarily affect the production of Li-7. The extraction of primordial abundances from observation and the likelihood that the primordial mass fraction of He-4, Y_p, is less than or equal to 0.24 are discussed. Using the primordial abundances of D + He-3 and Li-7, we limit the baryon-to-photon ratio to 2.6 ≤ η₁₀ ≤ 4.3 (where η₁₀ is η in units of 10⁻¹⁰), which we use to argue that baryons contribute between 0.02 and 0.11 to the critical energy density of the universe. An upper limit to Y_p of 0.24 constrains the number of light neutrinos to N_ν ≤ 3.4, in excellent agreement with the LEP and SLC collider results. We turn this argument around to show that the collider limit of 3 neutrino species can be used to bound the primordial abundance of He-4: 0.235 ≤ Y_p ≤ 0.245.

  14. Processing Solutions for Big Data in Astronomy

    NASA Astrophysics Data System (ADS)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
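
    The MapReduce parallel processing schema mentioned above can be illustrated with the classic word-count example. This is a toy single-process sketch, not Hadoop itself: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Frameworks like Hadoop and Spark execute the same three stages distributed across a cluster of machines.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, by summing counts)."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big insight", "big cluster"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
print(counts)   # {'big': 3, 'data': 1, 'insight': 1, 'cluster': 1}
```

    The appeal of the schema is that map and reduce are independent per key, so the framework can partition the work across nodes and handle data movement (the shuffle) on the programmer's behalf.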

  15. "Small Steps, Big Rewards": Preventing Type 2 Diabetes

    MedlinePlus

    ... Feature: Diabetes "Small Steps, Big Rewards": Preventing Type 2 Diabetes Past Issues / Fall ... These are the plain facts in "Small Steps. Big Rewards: Prevent Type 2 Diabetes," an education campaign ...

  16. Big Bend National Park

    NASA Image and Video Library

    2017-12-08

    Alternately known as a geologist’s paradise and a geologist’s nightmare, Big Bend National Park in southwestern Texas offers a multitude of rock formations. Sparse vegetation makes finding and observing the rocks easy, but they document a complicated geologic history extending back 500 million years. On May 10, 2002, the Enhanced Thematic Mapper Plus on NASA’s Landsat 7 satellite captured this natural-color image of Big Bend National Park. A black line delineates the park perimeter. The arid landscape appears in muted earth tones, some of the darkest hues associated with volcanic structures, especially the Rosillos and Chisos Mountains. Despite its bone-dry appearance, Big Bend National Park is home to some 1,200 plant species, and hosts more kinds of cacti, birds, and bats than any other U.S. national park. Credit: NASA/Landsat7

  17. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies are continuing to grow in not only size but also variety and complexity, with great speed. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in life sciences, big data related projects and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for the big data in life sciences.
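
    The core Semantic Web data model discussed in this survey is the RDF triple (subject, predicate, object). Purely for illustration, here is a toy in-memory triple store with SPARQL-like pattern matching, in pure Python; the life-science identifiers are invented examples, not data from the paper.

```python
# A toy triple store: each fact is a (subject, predicate, object) triple.
triples = {
    ("gene:BRCA1", "encodes", "protein:BRCA1"),
    ("protein:BRCA1", "involved_in", "process:DNA_repair"),
    ("gene:TP53", "involved_in", "process:DNA_repair"),
}

def match(s=None, p=None, o=None):
    # SPARQL-like triple pattern matching: None acts as a variable/wildcard.
    return [(a, b, c) for (a, b, c) in sorted(triples)
            if s in (None, a) and p in (None, b) and o in (None, o and c)]

# "What is involved in DNA repair?" -- one query spanning both sources.
print(match(p="involved_in", o="process:DNA_repair"))
```

    A real triple store (e.g. one queried via SPARQL endpoints) adds persistence, inference, and federation across datasets, which is what makes the model useful for heterogeneous life-science data integration.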

  18. Big data analytics to aid developing livable communities.

    DOT National Transportation Integrated Search

    2015-12-31

    In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed network makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...

  19. Ontogeny of Big endothelin-1 effects in newborn piglet pulmonary vasculature.

    PubMed

    Liben, S; Stewart, D J; De Marte, J; Perreault, T

    1993-07-01

    Endothelin-1 (ET-1), a 21-amino acid peptide produced by endothelial cells, results from the cleavage of preproendothelin, generating Big ET-1, which is then cleaved by the ET-converting enzyme (ECE) to form ET-1. Big ET-1, like ET-1, is released by endothelial cells. Big ET-1 is equipotent to ET-1 in vivo, whereas its vasoactive effects are less in vitro. It has been suggested that the effects of Big ET-1 depend on its conversion to ET-1. ET-1 has potent vasoactive effects in the newborn pig pulmonary circulation; however, the effects of Big ET-1 remain unknown. Therefore, we studied the effects of Big ET-1 in isolated perfused lungs from 1- and 7-day-old piglets using the ECE inhibitor, phosphoramidon, and the ETA receptor antagonist, BQ-123Na. The rate of conversion of Big ET-1 to ET-1 was measured using radioimmunoassay. ET-1 (10(-13) to 10(-8) M) produced an initial vasodilation, followed by a dose-dependent potent vasoconstriction (P < 0.001), which was equal at both ages. Big ET-1 (10(-11) to 10(-8) M) also produced a dose-dependent vasoconstriction (P < 0.001). The constrictor effects of Big ET-1 and ET-1 were similar in the 1-day-old, whereas in the 7-day-old, the constrictor effect of Big ET-1 was less than that of ET-1 (P < 0.017).(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Geology and ground-water resources of the Big Sandy Creek Valley, Lincoln, Cheyenne, and Kiowa Counties, Colorado; with a section on Chemical quality of the ground water

    USGS Publications Warehouse

    Coffin, Donald L.; Horr, Clarence Albert

    1967-01-01

    This report describes the geology and ground-water resources of that part of the Big Sandy Creek valley from about 6 miles east of Limon, Colo., downstream to the Kiowa County and Prowers County line, an area of about 1,400 square miles. The valley is drained by Big Sandy Creek and its principal tributary, Rush Creek. The land surface ranges from flat to rolling; the most irregular topography is in the sandhills south and west of Big Sandy Creek. Farming and livestock raising are the principal occupations. Irrigated lands constitute only a small part of the project area, but during the last 15 years irrigation has expanded. Exposed rocks range in age from Late Cretaceous to Recent. They comprise the Carlile Shale, Niobrara Formation, and Pierre Shale (all Late Cretaceous), upland deposits (Pleistocene), valley-fill deposits (Pleistocene and Recent), and dune sand (Pleistocene and Recent). Because the Upper Cretaceous formations are relatively impermeable and inhibit water movement, they allow ground water to accumulate in the overlying unconsolidated Pleistocene and Recent deposits. The valley-fill deposits constitute the major aquifer and yield as much as 800 gpm (gallons per minute) to wells along Big Sandy and Rush Creeks. Transmissibilities average about 45,000 gallons per day per foot. Maximum well yields in the tributary valleys are about 200 gpm and average 5 to 10 gpm. The dune sand and upland deposits generally are drained and yield water to wells in only a few places. The ground-water reservoir is recharged only from direct infiltration of precipitation, which annually averages about 12 inches for the entire basin, and from infiltration of floodwater. Floods in the ephemeral Big Sandy Creek are a major source of recharge to ground-water reservoirs.
Observations of a flood near Kit Carson indicated that about 3 acre-feet of runoff percolated into the ground-water reservoir through each acre of the wetted stream channel. The downstream decrease in channel and

  1. Infrastructure for Big Data in the Intensive Care Unit.

    PubMed

    Zelechower, Javier; Astudillo, José; Traversaro, Francisco; Redelico, Francisco; Luna, Daniel; Quiros, Fernan; San Roman, Eduardo; Risk, Marcelo

    2017-01-01

    The Big Data paradigm can be applied in the intensive care unit (ICU) in order to improve the treatment of patients, with the aim of customized decisions. This poster is about the infrastructure necessary to build a Big Data system for the ICU. Together with the infrastructure, the formation of a multidisciplinary team is essential to develop Big Data for use in critical care medicine.

  2. 76 FR 7837 - Big Rivers Electric Corporation; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ11-11-000] Big Rivers Electric Corporation; Notice of Filing Take notice that on February 4, 2011, Big Rivers Electric Corporation (Big Rivers) filed a notice of cancellation of its Second Revised and Restated Open Access...

  3. The Z → cc̄ → γγ*, Z → bb̄ → γγ* triangle diagrams and the Z → γψ, Z → γΥ decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achasov, N. N., E-mail: achasov@math.nsc.ru

    2011-03-15

    The approach to the Z → γψ and Z → γΥ decay study is presented in detail, based on the sum rules for the Z → cc̄ → γγ* and Z → bb̄ → γγ* amplitudes and their derivatives. The branching ratios of the Z → γψ and Z → γΥ decays are calculated for different hypotheses on saturation of the sum rules. The lower bounds Σ_ψ BR(Z → γψ) = 1.95 × 10^-7 and Σ_Υ BR(Z → γΥ) = 7.23 × 10^-7 are found. Deviations from the lower bounds are discussed, including the possibility of BR(Z → γJ/ψ(1S)) ≈ BR(Z → γΥ(1S)) ≈ 10^-6, which could probably be measured at the LHC. The angular distributions in the Z → γψ and Z → γΥ decays are also calculated.

  4. Data management by using R: big data clinical research series.

    PubMed

    Zhang, Zhongheng

    2015-11-01

    Electronic medical record (EMR) systems have been widely used in clinical practice. By replacing traditional handwritten records, the EMR makes big data clinical research feasible. The most important feature of big data research is its real-world setting. Furthermore, big data research can provide all aspects of information related to healthcare. However, big data research requires some skills in data management, which are often lacking in the curriculum of medical education. This greatly hinders doctors from testing their clinical hypotheses by using the EMR. To bridge this gap, a series of articles introducing data management techniques is put forward to guide clinicians into big data clinical research. The present educational article first introduces some basic knowledge of the R language, followed by some data management skills for creating new variables, recoding variables and renaming variables. These are very basic skills and may be used in every big data research project.
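
    The three skills named above (creating, recoding and renaming variables) are demonstrated in R in the article; purely as an illustration of the same manipulations, here is a sketch on toy records in Python. The field names and cutoffs are invented, not taken from the article.

```python
# Toy EMR rows; field names and thresholds are illustrative only.
records = [
    {"age": 71, "sbp": 150},
    {"age": 45, "sbp": 118},
]

for r in records:
    # Create a new variable from an existing one.
    r["elderly"] = r["age"] >= 65
    # Recode a continuous variable into categories.
    r["sbp_cat"] = "high" if r["sbp"] >= 140 else "normal"
    # Rename a variable: sbp -> systolic_bp.
    r["systolic_bp"] = r.pop("sbp")

print(records[0])
# {'age': 71, 'elderly': True, 'sbp_cat': 'high', 'systolic_bp': 150}
```

    In R the equivalents would be assignment to a new data-frame column, a recode with `ifelse()`, and `names()` replacement, which is what the article series walks through.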

  5. (Quasi)-convexification of Barta's (multi-extrema) bounding theorem: Inf_x(HΦ(x)/Φ(x)) ≤ E_gr ≤ Sup_x(HΦ(x)/Φ(x))

    NASA Astrophysics Data System (ADS)

    Handy, C. R.

    2006-03-01

    There has been renewed interest in the exploitation of Barta's configuration space theorem (BCST) (Barta 1937 C. R. Acad. Sci. Paris 204 472) which bounds the ground-state energy, Inf_x(HΦ(x)/Φ(x)) ≤ E_gr ≤ Sup_x(HΦ(x)/Φ(x)), by using any Φ lying within the space C of positive, bounded, and sufficiently smooth functions. Mouchet's (Mouchet 2005 J. Phys. A: Math. Gen. 38 1039) BCST analysis is based on gradient optimization (GO). However, it overlooks significant difficulties: (i) appearance of multi-extrema; (ii) inefficiency of GO for stiff (singular perturbation/strong coupling) problems; (iii) the nonexistence of a systematic procedure for arbitrarily improving the bounds within C. These deficiencies can be corrected by transforming BCST into a moments' representation equivalent, and exploiting a generalization of the eigenvalue moment method (EMM), within the context of the well-known generalized eigenvalue problem (GEP), as developed here. EMM is an alternative eigenenergy bounding, variational procedure, overlooked by Mouchet, which also exploits the positivity of the desired physical solution. Furthermore, it is applicable to Hermitian and non-Hermitian systems with complex-number quantization parameters (Handy and Bessis 1985 Phys. Rev. Lett. 55 931, Handy et al 1988 Phys. Rev. Lett. 60 253, Handy 2001 J. Phys. A: Math. Gen. 34 5065, Handy et al 2002 J. Phys. A: Math. Gen. 35 6359). Our analysis exploits various quasi-convexity/concavity theorems common to the GEP representation. We outline the general theory, and present some illustrative examples.
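
    A toy check of Barta's bound, not from the paper: for the harmonic oscillator H = -d²/dx² + x² (exact ground energy E_gr = 1) and the trial family Φ_a(x) = exp(-a x²/2), differentiating twice gives HΦ_a/Φ_a = a + (1 - a²)x², so scanning that ratio over a grid brackets the exact energy.

```python
def barta_ratio(x, a):
    # (H Phi_a)(x) / Phi_a(x) for H = -d^2/dx^2 + x^2, Phi_a = exp(-a x^2/2);
    # computed analytically, the ratio is a + (1 - a^2) x^2.
    return a + (1.0 - a * a) * x * x

# Scan a grid; for a = 0.8 the infimum of the ratio lower-bounds E_gr = 1.
xs = [i / 100.0 for i in range(-500, 501)]
a = 0.8
lower = min(barta_ratio(x, a) for x in xs)
upper = max(barta_ratio(x, a) for x in xs)
print(lower <= 1.0 <= upper)  # the exact ground energy sits inside the bounds
```

    This also exhibits the multi-extrema issue the paper raises in milder form: only at a = 1 do the two bounds collapse onto the exact energy, and for a generic Φ one must locate the global extrema of the ratio, which is what the moments' reformulation is designed to systematize.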

  6. The Virtual PM

    DTIC Science & Technology

    2013-11-01

    move. Fortunately, I am not required (nor do I choose) to be a “road warrior,” so I can support the project team from home base while the PM takes care... result in big dividends down the road when his team members have to handle such situations on their own. It will take some of the stress out of the PM’s

  7. Keeping up with Big Data--Designing an Introductory Data Analytics Class

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2016-01-01

    Universities need to keep up with the demand of the business world when it comes to Big Data. The exponential increase in data has put additional demands on academia to meet the big gap in education. Business demand for Big Data has surpassed 1.9 million positions in 2015. Big Data, Business Intelligence, Data Analytics, and Data Mining are the…

  8. [Applications of eco-environmental big data: Progress and prospect].

    PubMed

    Zhao, Miao Miao; Zhao, Shi Cheng; Zhang, Li Yun; Zhao, Fen; Shao, Rui; Liu, Li Xiang; Zhao, Hai Feng; Xu, Ming

    2017-05-18

    With the advance of internet and wireless communication technology, the fields of ecology and environment have entered a new digital era, with the amount of data growing explosively and big data technologies attracting more and more attention. Eco-environmental big data is based on airborne, space-based and land-based observations of ecological and environmental factors, and its ultimate goal is to integrate multi-source and multi-scale data for information mining by taking advantage of cloud computation, artificial intelligence, and modeling technologies. In comparison with other fields, eco-environmental big data has its own characteristics, such as diverse data formats and sources, data collected with various protocols and standards, and serving different clients and organizations with special requirements. Big data technology has been applied worldwide in ecological and environmental fields, including global climate prediction, ecological network observation and modeling, and regional air pollution control. The development of eco-environmental big data in China faces many problems, such as data sharing issues, outdated monitoring facilities and technologies, and insufficient data mining capacity. Despite all this, big data technology is critical to solving eco-environmental problems, improving prediction and warning accuracy for eco-environmental catastrophes, and boosting scientific research in the field in China. We expect that eco-environmental big data will contribute significantly to policy making and environmental services and management, and thus to sustainable development and eco-civilization construction in China in the coming decades.

  9. Big system: Interactive graphics for the engineer

    NASA Technical Reports Server (NTRS)

    Quenneville, C. E.

    1975-01-01

    The BCS Interactive Graphics System (BIG System) approach to graphics was presented, along with several significant engineering applications. The BIG System precompiler, the graphics support library, and the function requirements of graphics applications are discussed. It was concluded that graphics standardization and a device independent code can be developed to assure maximum graphic terminal transferability.

  10. Insights into big sagebrush seedling storage practices

    Treesearch

    Emily C. Overton; Jeremiah R. Pinto; Anthony S. Davis

    2013-01-01

    Big sagebrush (Artemisia tridentata Nutt. [Asteraceae]) is an essential component of shrub-steppe ecosystems in the Great Basin of the US, where degradation due to altered fire regimes, invasive species, and land use changes have led to increased interest in the production of high-quality big sagebrush seedlings for conservation and restoration projects. Seedling...

  11. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  12. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  13. Big Data and Nursing: Implications for the Future.

    PubMed

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

    Big data is becoming increasingly more prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the required skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  14. Information Retrieval Using Hadoop Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Motwani, Deepak; Madan, Madan Lal

    This paper concerns big data analysis, the cognitive operation of probing huge amounts of information in an attempt to uncover unseen patterns. Through big data analytics applications, public- and private-sector organizations have made the strategic determination to turn big data into competitive benefit. The primary work of extracting value from big data gives rise to a process applied to pull information from multiple different sources; this process is known as extract, transform and load (ETL). The approach here extracts information from log files and research papers, reducing the effort needed for pattern finding and for summarizing documents from several sources. The work helps build a better understanding of basic Hadoop concepts and improves the user experience for research. In this paper, we propose an approach for analysing log files with Hadoop to find concise, useful information in a time-saving way. Our proposed approach will be applied to different research papers in a specific domain to obtain summarized content for further improvement and to produce new content.

  15. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  16. The big data processing platform for intelligent agriculture

    NASA Astrophysics Data System (ADS)

    Huang, Jintao; Zhang, Lichen

    2017-08-01

    Big data technology is another popular technology after the Internet of Things and cloud computing. Big data is widely used in many fields such as social platforms, e-commerce, financial analysis and so on. Intelligent agriculture will, in the course of its operation, produce large amounts of data of complex structure; fully mining the value of these data will be very meaningful for the development of agriculture. This paper proposes an intelligent data processing platform based on Storm and Cassandra to realize the storage and management of the big data of intelligent agriculture.

  17. Research Activities at Fermilab for Big Data Movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W

    2013-01-01

    Adaptation of 100GE Networking Infrastructure is the next step towards management of Big Data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of the big data. In this paper, we will describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.

  18. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    NASA Astrophysics Data System (ADS)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning.
We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm.
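
    The bit-interleaving step described above can be sketched directly; this is a minimal Python version for 2-D grids, not the authors' Spark implementation.

```python
def morton2d(x, y, bits=16):
    # Interleave the bits of x and y (y supplies the higher bit of each
    # pair), yielding the Z-order / Morton code of grid cell (x, y).
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# The example from the abstract: x = 1 = 01b, y = 3 = 11b -> 1011b = 11.
print(morton2d(1, 3))  # 11
```

    Because nearby cells share high-order code bits, truncating low bits (i.e. using fewer bits per dimension) merges neighbouring cells into larger partitions, which is exactly the level-of-detail control the abstract describes.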

  19. [Utilization of Big Data in Medicine and Future Outlook].

    PubMed

    Kinosada, Yasutomi; Uematsu, Machiko; Fujiwara, Takuya

    2016-03-01

    "Big data" is a new buzzword. The point is not to be dazzled by the volume of data, but rather to analyze it, and convert it into insights, innovations, and business value. There are also real differences between conventional analytics and big data. In this article, we show some results of big data analysis using open DPC (Diagnosis Procedure Combination) data in areas of the central part of JAPAN: Toyama, Ishikawa, Fukui, Nagano, Gifu, Aichi, Shizuoka, and Mie Prefectures. These 8 prefectures contain 51 medical administration areas called the second medical area. By applying big data analysis techniques such as k-means, hierarchical clustering, and self-organizing maps to DPC data, we can visualize the disease structure and detect similarities or variations among the 51 second medical areas. The combination of a big data analysis technique and open DPC data is a very powerful method to depict real figures on patient distribution in Japan.

  20. Research on Technology Innovation Management in Big Data Environment

    NASA Astrophysics Data System (ADS)

    Ma, Yanhong

    2018-02-01

    With the continuous development and progress of the information age, the demand for information keeps growing, and the processing and analysis of information data are moving toward ever larger scale. The increasing amount of information data places higher demands on processing technology. The explosive growth of information data in today's society has prompted the advent of the era of big data. At present, people find more value and significance in producing and processing various kinds of information and data in their lives. How to use big data technology to process and analyze information data quickly, and so improve the level of big data management, is an important step in promoting the development of information and data processing technology in our country. To some extent, innovative research on the management methods of information technology in the era of big data can enhance our overall strength and put China in an invincible position in the development of the big data era.

  1. Association of Big Endothelin-1 with Coronary Artery Calcification.

    PubMed

    Qing, Ping; Li, Xiao-Lin; Zhang, Yan; Li, Yi-Lin; Xu, Rui-Xia; Guo, Yuan-Lin; Li, Sha; Wu, Na-Qiong; Li, Jian-Jun

    2015-01-01

    Coronary artery calcification (CAC) is clinically considered one of the important predictors of atherosclerosis. Several studies have confirmed that endothelin-1 (ET-1) plays an important role in the process of atherosclerosis formation. The aim of this study was to investigate whether big ET-1 is associated with CAC. A total of 510 consecutively admitted patients from February 2011 to May 2012 in Fu Wai Hospital were analyzed. All patients had received coronary computed tomography angiography and were then divided into two groups based on the results of the coronary artery calcium score (CACS). The clinical characteristics including traditional and calcification-related risk factors were collected and the plasma big ET-1 level was measured by ELISA. Patients with CAC had a significantly elevated big ET-1 level compared with those without CAC (0.5 ± 0.4 vs. 0.2 ± 0.2, P<0.001). In the multivariate analysis, big ET-1 (Tertile 2, HR = 3.09, 95% CI 1.66-5.74, P<0.001; Tertile 3, HR = 10.42, 95% CI 3.62-29.99, P<0.001) appeared as an independent predictive factor of the presence of CAC. There was a positive correlation of the big ET-1 level with CACS (r = 0.567, p<0.001). The 10-year Framingham risk (%) was higher in the group with CACS>0 and the highest tertile of big ET-1 (P<0.01). The area under the receiver operating characteristic curve for the big ET-1 level in predicting CAC was 0.83 (95% CI 0.79-0.87, p<0.001), with a sensitivity of 70.6% and specificity of 87.7%. These data demonstrated for the first time that the plasma big ET-1 level was a valuable independent predictor of CAC in our study.

  2. Meta-analyses of Big Six Interests and Big Five Personality Factors.

    ERIC Educational Resources Information Center

    Larson, Lisa M.; Rottinghaus, Patrick J.; Borgen, Fred H.

    2002-01-01

    Meta-analysis of 24 samples demonstrated overlap between Holland's vocational interest domains (measured by Self Directed Search, Strong Interest Inventory, and Vocational Preference Inventory) and Big Five personality factors (measured by Revised NEO Personality Inventory). The link is stronger for five interest-personality pairs:…

  3. [Big Data- challenges and risks].

    PubMed

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  4. Big game habitat use in southeastern Montana

    Treesearch

    James G. MacCracken; Daniel W. Uresk

    1984-01-01

    The loss of suitable, high quality habitat is a major problem facing big game managers in the western United States. Agricultural, water, road and highway, housing, and recreational development have contributed to loss of natural big game habitat (Wallmo et al. 1976, Reed 1981). In the western United States, surface mining of minerals has great potential to adversely...

  5. A Big Data Analytics Methodology Program in the Health Sector

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  6. View of New Big Oak Flat Road seen from Old ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of New Big Oak Flat Road seen from Old Wawona Road near location of photograph HAER CA-148-17. Note road cuts, alignment, and tunnels. Devils Dance Floor at left distance. Looking northwest - Big Oak Flat Road, Between Big Oak Flat Entrance & Merced River, Yosemite Village, Mariposa County, CA

  7. The Study of “big data” to support internal business strategists

    NASA Astrophysics Data System (ADS)

    Ge, Mei

    2018-01-01

    How is big data different from previous data analysis systems? The primary purpose behind traditional small-data analytics, which all managers are more or less familiar with, is to support internal business strategies. But big data also offers a promising new dimension: discovering new opportunities to offer customers high-value products and services. This study introduces some of the strategies that big data can support. Business decisions using big data can also involve several areas for analytics, including customer satisfaction, customer journeys, supply chains, risk management, competitive intelligence, pricing, and discovery and experimentation, or facilitating big data discovery.

  8. Occurrence and Partial Characterization of Lettuce big vein associated virus and Mirafiori lettuce big vein virus in Lettuce in Iran.

    PubMed

    Alemzadeh, E; Izadpanah, K

    2012-12-01

    Mirafiori lettuce big vein virus (MiLBVV) and lettuce big vein associated virus (LBVaV) were found in association with big vein disease of lettuce in Iran. Analysis of part of the coat protein (CP) gene of Iranian isolates of LBVaV showed 97.1-100 % nucleotide sequence identity with other LBVaV isolates. Iranian isolates of MiLBVV belonged to subgroup A and showed 88.6-98.8 % nucleotide sequence identity with other isolates of this virus when amplified by PCR primer pair MiLV VP. The occurrence of both viruses in lettuce crop was associated with the presence of resting spores and zoosporangia of the fungus Olpidium brassicae in lettuce roots under field and greenhouse conditions. Two months after sowing lettuce seed in soil collected from a lettuce field with big vein affected plants, all seedlings were positive for LBVaV and MiLBVV, indicating soil transmission of both viruses.

  9. [Contemplation on the application of big data in clinical medicine].

    PubMed

    Lian, Lei

    2015-01-01

    Medicine is another area where big data is being used. The link between clinical treatment and outcome is the key step when applying big data in medicine. In the era of big data, it is critical to collect complete outcome data. Patient follow-up, comprehensive integration of data resources, quality control, and standardized data management are the predominant approaches to avoiding missing data and data islands. Therefore, establishing a systematic patient follow-up protocol and a prospective data management strategy are important aspects of big data in medicine.

  10. Cincinnati Big Area Additive Manufacturing (BAAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duty, Chad E.; Love, Lonnie J.

    Oak Ridge National Laboratory (ORNL) worked with Cincinnati Incorporated (CI) to demonstrate Big Area Additive Manufacturing which increases the speed of the additive manufacturing (AM) process by over 1000X, increases the size of parts by over 10X and shows a cost reduction of over 100X. ORNL worked with CI to transition the Big Area Additive Manufacturing (BAAM) technology from a proof-of-principle (TRL 2-3) demonstration to a prototype product stage (TRL 7-8).

  11. Solution structure of leptospiral LigA4 Big domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Song; Zhang, Jiahai; Zhang, Xuecheng

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca{sup 2+}-binding property of the LigA4 Big domain. - Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca{sup 2+}-binding site was identified by Stains-all assay and NMR chemical shift perturbation.

  12. Informatics in neurocritical care: new ideas for Big Data.

    PubMed

    Flechet, Marine; Grandas, Fabian Güiza; Meyfroidt, Geert

    2016-04-01

    Big data is the new hype in business and healthcare. Data storage and processing has become cheap, fast, and easy. Business analysts and scientists are trying to design methods to mine these data for hidden knowledge. Neurocritical care is a field that typically produces large amounts of patient-related data, and these data are increasingly being digitized and stored. This review will try to look beyond the hype, and focus on possible applications in neurointensive care amenable to Big Data research that can potentially improve patient care. The first challenge in Big Data research will be the development of large, multicenter, and high-quality databases. These databases could be used to further investigate recent findings from mathematical models, developed in smaller datasets. Randomized clinical trials and Big Data research are complementary. Big Data research might be used to identify subgroups of patients that could benefit most from a certain intervention, or can be an alternative in areas where randomized clinical trials are not possible. The processing and the analysis of the large amount of patient-related information stored in clinical databases is beyond normal human cognitive ability. Big Data research applications have the potential to discover new medical knowledge, and improve care in the neurointensive care unit.

  13. Big Data Analytics for Genomic Medicine.

    PubMed

    He, Karen Y; Ge, Dongliang; He, Max M

    2017-02-15

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients' genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining large-scale various data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.

  14. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming a norm in geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data for new information and knowledge is desired. This paper introduces our latest effort on developing such a platform, based on our past years' experience with cloud and high-performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization to provide on-demand computing services for upper layers; c) the 3rd layer is big data containers that are customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.

  15. The New Improved Big6 Workshop Handbook. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This handbook is intended to help classroom teachers, teacher-librarians, technology teachers, administrators, parents, community members, and students to learn about the Big6 Skills approach to information and technology skills, to use the Big6 process in their own activities, and to implement a Big6 information and technology skills program. The…

  16. Stock price dynamics and option valuations under volatility feedback effect

    NASA Astrophysics Data System (ADS)

    Kanniainen, Juho; Piché, Robert

    2013-02-01

    According to the volatility feedback effect, an unexpected increase in squared volatility leads to an immediate decline in the price-dividend ratio. In this paper, we consider the properties of stock price dynamics and option valuations under the volatility feedback effect by modeling the joint dynamics of stock price, dividends, and volatility in continuous time. Most importantly, our model predicts a negative effect of an increase in squared return volatility on the value of deep-in-the-money call options and, furthermore, attempts to explain the volatility puzzle. We theoretically demonstrate a mechanism by which the market price of diffusion return risk, or an equity risk premium, affects option prices, and we empirically illustrate how to identify that mechanism using forward-looking information on option contracts. Our theoretical and empirical results support the relevance of the volatility feedback effect. Overall, the results indicate that the prevailing practice of ignoring the time-varying dividend yield in option pricing can lead to oversimplification of the stock market dynamics.
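    The closing point, that ignoring the dividend yield oversimplifies option pricing, can be illustrated with the textbook constant-yield adjustment to Black-Scholes (a deliberate simplification of the paper's time-varying feedback model): the spot price is discounted by exp(-q*T), which lowers call values. The parameter values below are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, r, sigma, q=0.0):
    """European call under Black-Scholes with a continuous
    dividend yield q: the spot is discounted by exp(-q*t)."""
    d1 = (log(spot / strike) + (r - q + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * exp(-q * t) * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# A positive dividend yield lowers the call value, all else equal.
print(bs_call(100, 100, 1.0, 0.05, 0.2, q=0.0))
print(bs_call(100, 100, 1.0, 0.05, 0.2, q=0.03))
```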

  17. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed for different abstraction modules such as transaction, engine, computation, control, and storage. The formerly separate modules for power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  18. An artificial intelligence approach fit for tRNA gene studies in the era of big sequence data.

    PubMed

    Iwasaki, Yuki; Abe, Takashi; Wada, Kennosuke; Wada, Yoshiko; Ikemura, Toshimichi

    2017-09-12

    Unsupervised data mining capable of extracting a wide range of knowledge from big data without prior knowledge or particular models is a timely application in the era of big sequence data accumulation in genome research. By handling oligonucleotide compositions as high-dimensional data, we have previously modified the conventional self-organizing map (SOM) for genome informatics and established BLSOM, which can analyze more than ten million sequences simultaneously. Here, we develop BLSOM specialized for tRNA genes (tDNAs) that can cluster (self-organize) more than one million microbial tDNAs according to their cognate amino acid solely depending on tetra- and pentanucleotide compositions. This unsupervised clustering can reveal combinatorial oligonucleotide motifs that are responsible for the amino acid-dependent clustering, as well as other functionally and structurally important consensus motifs, which have been evolutionarily conserved. BLSOM is also useful for identifying tDNAs as phylogenetic markers for special phylotypes. When we constructed BLSOM with 'species-unknown' tDNAs from metagenomic sequences plus 'species-known' microbial tDNAs, a large portion of metagenomic tDNAs self-organized with species-known tDNAs, yielding information on microbial communities in environmental samples. BLSOM can also enhance accuracy in the tDNA database obtained from big sequence data. This unsupervised data mining should become important for studying numerous functionally unclear RNAs obtained from a wide range of organisms.
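    The oligonucleotide-composition representation that BLSOM clusters can be sketched in a few lines; this is an illustrative reconstruction, not the authors' BLSOM code, and the example sequence is invented. Each sequence becomes a vector of tetranucleotide frequencies (4^4 = 256 dimensions), which a self-organizing map can then cluster:

```python
from itertools import product

# All 256 tetranucleotides in a fixed order define the vector axes.
KMERS = [''.join(p) for p in product('ACGT', repeat=4)]

def tetra_composition(seq):
    """Return the 256-dimensional tetranucleotide frequency vector."""
    counts = {k: 0 for k in KMERS}
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:          # skip windows containing N, etc.
            counts[kmer] += 1
    total = sum(counts.values()) or 1
    return [counts[k] / total for k in KMERS]

vec = tetra_composition("GGTTCGATTCCCGG" * 5)  # toy tRNA-like fragment
print(len(vec))
```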

  19. Big Data: More than Just Big and More than Just Data.

    PubMed

    Spencer, Gregory A

    2017-01-01

    According to one report, 90 percent of the data in the world today were created in the past two years. This statistic is not surprising given the explosion of mobile phones and other devices that generate data, the Internet of Things (e.g., smart refrigerators), and metadata (data about data). While it might be a stretch to figure out how a healthcare organization can use data generated from an ice maker, data from a plethora of rich and useful sources, when combined with an organization's own data, can produce improved results. How can healthcare organizations leverage these rich and diverse data sources to improve patients' health and make their businesses more competitive? The authors of the two feature articles in this issue of Frontiers provide tangible examples of how their organizations are using big data to meaningfully improve healthcare. Sentara Healthcare and Carolinas HealthCare System both use big data in creative ways that differ because of different business situations, yet are also similar in certain respects.

  20. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  1. Big data, smart homes and ambient assisted living.

    PubMed

    Vimarlund, V; Wass, S

    2014-08-15

    To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions and the market faces a number of challenges as the use of big data will increase. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers, it is also of interest for decision makers as customers make more informed choices among today's services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.

  2. Big Data, Smart Homes and Ambient Assisted Living

    PubMed Central

    Wass, S.

    2014-01-01

    Summary. Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions and the market faces a number of challenges as the use of big data will increase. First, there is a need for a sustainable and trustful information chain where the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation where information is handled and transferred independently of the place of the expertise. Finally, there is a possibility to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest for IT developers, it is also of interest for decision makers as customers make more informed choices among today's services. In the future it will be of importance to make information usable for managers and improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability. PMID:25123734

  3. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamenshchik, A. Yu.; Manti, S.

    We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity - the model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not.

  4. Big trees in the southern forest inventory

    Treesearch

    Christopher M. Oswalt; Sonja N. Oswalt; Thomas J. Brandeis

    2010-01-01

    Big trees fascinate people worldwide, inspiring respect, awe, and oftentimes, even controversy. This paper uses a modified version of American Forests’ Big Trees Measuring Guide point system (May 1990) to rank trees sampled between January of 1998 and September of 2007 on over 89,000 plots by the Forest Service, U.S. Department of Agriculture, Forest Inventory and...

  5. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes a lot of energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
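    A minimal, self-contained sketch of the genetic-algorithm idea applied to job ordering (permutation chromosomes, order crossover, swap mutation). The fitness model (total weighted completion time) and all parameters are illustrative assumptions, not the authors' Hadoop-based scheduler:

```python
import random

random.seed(42)
runtimes = [5, 2, 8, 1, 4, 7]   # hypothetical job runtimes
weights  = [1, 3, 2, 5, 1, 2]   # hypothetical job priorities

def cost(order):
    """Total weighted completion time of a job sequence (lower is better)."""
    t, total = 0, 0
    for j in order:
        t += runtimes[j]
        total += weights[j] * t
    return total

def crossover(a, b):
    """Order crossover: keep a slice of a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(6), 6) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    parents = pop[:10]                      # elitist selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=cost)
print(best, cost(best))
```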

  6. ELM Meets Urban Big Data Analysis: Case Studies

    PubMed Central

    Chen, Huajun; Chen, Jiaoyan

    2016-01-01

    In recent years, the rapid progress of urban computing has engendered big issues, which create both opportunities and challenges. The heterogeneity and volume of the data, and the big difference between the physical and virtual worlds, have made it difficult to quickly solve practical problems in urban computing. In this paper, we propose a general application framework of ELM for urban computing. We present several real case studies of the framework, such as smog-related health hazard prediction and optimal retail store placement. Experiments involving urban data in China show the efficiency, accuracy, and flexibility of our proposed framework. PMID:27656203

  7. Big Data in Psychology: Introduction to Special Issue

    PubMed Central

    Harlow, Lisa L.; Oswald, Frederick L.

    2016-01-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: 1. The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. 2. Availability of large datasets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. Identifying, addressing, and being sensitive to ethical considerations when analyzing large datasets gained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. PMID:27918177

  8. Beyond simple charts: Design of visualizations for big health data

    PubMed Central

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that only represent few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416

  9. Beyond simple charts: Design of visualizations for big health data.

    PubMed

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that only represent few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.

  10. BIG: a large-scale data integration tool for renal physiology.

    PubMed

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.
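    The "single query over all indexed databases" pattern the abstract describes can be sketched in a few lines. The dataset names, gene records, and values below are invented for illustration and do not reflect BIG's actual schema or API:

```python
# Toy stand-in for several indexed omics datasets, each keyed by gene symbol.
datasets = {
    "proteome_cortex":  {"AQP2": {"abundance": 1.8e6},
                         "SLC12A1": {"abundance": 9.1e5}},
    "transcriptome_cd": {"AQP2": {"tpm": 412.0}},
    "phosphoproteome":  {"SLC12A1": {"sites": ["T96", "T101"]}},
}

def gather(gene):
    """One query fans out over every dataset and returns all records
    for the gene, keyed by dataset name."""
    return {name: records[gene]
            for name, records in datasets.items()
            if gene in records}

print(sorted(gather("AQP2")))
```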

  11. BigData as a Driver for Capacity Building in Astrophysics

    NASA Astrophysics Data System (ADS)

    Shastri, Prajval

    2015-08-01

    Exciting public interest in astrophysics acquires new significance in the era of Big Data. Since Big Data involves advanced technologies of both software and hardware, astrophysics with Big Data has the potential to inspire young minds with diverse inclinations - i.e., not just those attracted to physics but also those pursuing engineering careers. Digital technologies have become steadily cheaper, which can enable considerable expansion of the Big Data user pool, especially to communities that may not yet be in the astrophysics mainstream but have high potential because of access to these technologies. For success, however, capacity building at the early stages becomes key. The development of online pedagogical resources in astrophysics, astrostatistics, data mining, and data visualisation that are designed around the big facilities of the future can be an important effort that drives such capacity building, especially if facilitated by the IAU.

  12. The dominance of big pharma: power.

    PubMed

    Edgar, Andrew

    2013-05-01

    The purpose of this paper is to provide a normative model for the assessment of the exercise of power by Big Pharma. By drawing on the work of Steven Lukes, it will be argued that while Big Pharma is overtly highly regulated, so that its power is indeed restricted in the interests of patients and the general public, the industry is still able to exercise what Lukes describes as a third dimension of power. This entails concealing the conflicts of interest and grievances that Big Pharma may have with the health care system, physicians and patients, crucially through rhetorical engagements with Patient Advocacy Groups that seek to shape public opinion, and also by marginalising certain groups, excluding them from debates over health care resource allocation. Three issues will be examined: the construction of a conception of the patient as expert patient or consumer; the phenomenon of disease mongering; the suppression or distortion of debates over resource allocation.

  13. Big data for space situation awareness

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Pugh, Mark; Sheaff, Carolyn; Raquepas, Joe; Rocci, Peter

    2017-05-01

    Recent advances in big data (BD) have focused research on the volume, velocity, veracity, and variety of data. These developments enable new opportunities in information management, visualization, machine learning, and information fusion that have potential implications for space situational awareness (SSA). In this paper, we explore some of these BD trends as applicable to SSA towards enhancing the space operating picture. The BD developments could increase measures of performance and measures of effectiveness for future management of the space environment. The global SSA influences include resident space object (RSO) tracking and characterization, cyber protection, remote sensing, and information management. Local satellite awareness can benefit from space weather, health monitoring, and spectrum management for space situation understanding. One area in big data of importance to SSA is value - getting the correct data/information at the right time, which corresponds to SSA visualization for the operator. An SSA big data example is presented supporting disaster relief for space situation awareness, assessment, and understanding.

  14. Big Data Analytics for Genomic Medicine

    PubMed Central

    He, Karen Y.; Ge, Dongliang; He, Max M.

    2017-01-01

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights by examining large-scale, diverse data sets. While the integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure present challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287

  15. Big Sky and Greenhorn Elemental Comparison

    NASA Image and Video Library

    2015-12-17

    NASA's Curiosity Mars rover examined both the "Greenhorn" and "Big Sky" targets with the rover's Alpha Particle X-ray Spectrometer (APXS) instrument. Greenhorn is located within an altered fracture zone and has an elevated concentration of silica (about 60 percent by weight). Big Sky is the unaltered counterpart for comparison. The bar plot on the left shows scaled concentrations as analyzed by Curiosity's APXS. The bar plot on the right shows what the Big Sky composition would look like if silica (SiO2) and calcium-sulfate (both abundant in Greenhorn) were added. The similarity in the resulting composition suggests that much of the chemistry of Greenhorn could be explained by the addition of silica. Ongoing research aims to distinguish between that possible explanation for silicon enrichment and an alternative of silicon being left behind when some other elements were removed by acid weathering. http://photojournal.jpl.nasa.gov/catalog/PIA20275

  16. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  17. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology Criteria for Adverse Events (known as CTCAE), and the laboratory portion of the Logical Observation Identifiers, Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  18. [Big data approaches in psychiatry: examples in depression research].

    PubMed

    Bzdok, D; Karrer, T M; Habel, U; Schneider, F

    2017-11-29

    The exploration and therapy of depression are complicated by heterogeneous etiological mechanisms and various comorbidities. With the growing trend towards big data in psychiatry, research and therapy can increasingly target the individual patient. This novel objective requires special methods of analysis. The possibilities and challenges of the application of big data approaches in depression are examined in closer detail. Examples are given to illustrate the possibilities of big data approaches in depression research. Modern machine learning methods are compared to traditional statistical methods in terms of their potential in applications to depression. Big data approaches are particularly suited to the analysis of detailed observational data, the prediction of single data points or several clinical variables and the identification of endophenotypes. A current challenge lies in the transfer of results into the clinical treatment of patients with depression. Big data approaches enable biological subtypes in depression to be identified and predictions in individual patients to be made. They have enormous potential for prevention, early diagnosis, treatment choice and prognosis of depression as well as for treatment development.

  19. A practical guide to big data research in psychology.

    PubMed

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
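    The text-analysis walkthrough the guide describes can be sketched in a few lines. The following is an illustrative scikit-learn example of latent Dirichlet allocation topic modeling, not the article's own tutorial code; the toy corpus, parameter values, and library choice are my assumptions:

    ```python
    # Minimal LDA topic-modeling sketch with scikit-learn (illustrative only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "happy joy smile laugh friends",
        "joy laugh party friends fun",
        "sad cry tears grief loss",
        "grief loss sad mourning tears",
    ]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)          # document-term count matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)           # per-document topic proportions

    # Print the top three words for each inferred topic
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:3]]
        print(f"topic {k}: {top}")
    ```

    On a real corpus the same pipeline applies; only the vectorizer settings (stop words, n-grams) and the number of topics typically need tuning.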

  20. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    PubMed

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

    The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and of standardizing data and workflow processes, so that cutting-edge analytic methods can be used to control costs and improve patient quality and safety.

  1. From big data to deep insight in developmental science.

    PubMed

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  2. Translating Big Data into Smart Data for Veterinary Epidemiology

    PubMed Central

    VanderWaal, Kimberly; Morrison, Robert B.; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M.

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having “big data” to create “smart data,” with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues. PMID:28770216

  3. Analysis of BigFoot HDC SymCap experiment N161205 on NIF

    NASA Astrophysics Data System (ADS)

    Dittrich, T. R.; Baker, K. L.; Thomas, C. A.; Berzak Hopkins, L. F.; Harte, J. A.; Zimmerman, G. B.; Woods, D. T.; Kritcher, A. L.; Ho, D. D.; Weber, C. R.; Kyrala, G.

    2017-10-01

    Analysis of NIF implosion experiment N161205 provides insight into both hohlraum and capsule performance. This experiment used an undoped High Density Carbon (HDC) ablator driven by a BigFoot x-ray profile in a Au hohlraum. Observations from this experiment include DT fusion yield, bang time, DSR, Tion and time-resolved x-ray emission images around bang time. These observations are all consistent with an x-ray spectrum having significantly reduced Au m-band emission that is present in a standard hohlraum simulation. Attempts to justify the observations using several other simulation modifications will be presented. This work was performed under the auspices of the Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.

  4. The BIG Score and Prediction of Mortality in Pediatric Blunt Trauma.

    PubMed

    Davis, Adrienne L; Wales, Paul W; Malik, Tahira; Stephens, Derek; Razik, Fathima; Schuh, Suzanne

    2015-09-01

    To examine the association between in-hospital mortality and the BIG score (composed of the base deficit [B], International Normalized Ratio [I], and Glasgow Coma Scale [G]) measured on arrival to the emergency department in pediatric blunt trauma patients, adjusted for prehospital intubation, volume administration, and presence of hypotension and head injury. We also examined the association between the BIG score and mortality in patients requiring admission to the intensive care unit (ICU). A retrospective 2001-2012 trauma database review of patients with blunt trauma ≤ 17 years old with an Injury Severity Score ≥ 12. Charts were reviewed for in-hospital mortality, components of the BIG score upon arrival to the emergency department, prehospital intubation, crystalloids ≥ 20 mL/kg, presence of hypotension, head injury, and disposition. 50/621 (8%) of the study patients died. Independent mortality predictors were the BIG score (OR 11, 95% CI 6-25), prior fluid bolus (OR 3, 95% CI 1.3-9), and prior intubation (OR 8, 95% CI 2-40). The area under the receiver operating characteristic curve was 0.95 (CI 0.93-0.98), with an optimal BIG cutoff of 16. With BIG <16, the death rate was 3/496 (0.006, 95% CI 0.001-0.007) vs 47/125 (0.38, 95% CI 0.15-0.7) with BIG ≥ 16 (P < .0001). In patients requiring admission to the ICU, the BIG score remained predictive of mortality (OR 14.3, 95% CI 7.3-32, P < .0001). The BIG score accurately predicts mortality in a population of North American pediatric patients with blunt trauma independent of prehospital interventions, presence of head injury, and hypotension, and identifies children with a high probability of survival (BIG <16). The BIG score is also associated with mortality in pediatric patients with trauma requiring admission to the ICU. Copyright © 2015 Elsevier Inc. All rights reserved.
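    For concreteness, the score itself is a simple arithmetic combination of its three components. The formula below follows the originally published formulation (BIG = base deficit + 2.5 × INR + (15 − GCS), per Borgman et al., 2011); that formula is drawn from the wider literature rather than from this abstract, so treat it as an assumption, with only the cutoff of 16 taken from the record above:

    ```python
    # Hedged sketch of the BIG score computation (formula assumed from
    # Borgman et al., 2011; the >= 16 cutoff is from the abstract above).
    def big_score(base_deficit: float, inr: float, gcs: int) -> float:
        """BIG = base deficit (mEq/L) + 2.5 * INR + (15 - GCS), GCS in 3-15."""
        return base_deficit + 2.5 * inr + (15 - gcs)

    def high_mortality_risk(score: float, cutoff: float = 16.0) -> bool:
        """Apply the abstract's optimal cutoff: BIG >= 16 flags high risk."""
        return score >= cutoff

    # Example: base deficit 8 mEq/L, INR 1.6, GCS 6 -> 8 + 4 + 9 = 21
    score = big_score(8, 1.6, 6)
    print(score, high_mortality_risk(score))
    ```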

  5. Gene banks pay big dividends to agriculture, the environment, and human welfare

    Treesearch

    R. C. Johnson

    2008-01-01

    Nearly a century after the pioneering American apple tree purveyor Johnny Appleseed traveled from town to town planting nurseries in the Midwestern United States, Frans Nicholas Meijer left his Netherlands home to pursue a similar vocation as an "agricultural explorer" for the US Department of Agriculture. Over the course of his career, Meijer, who...

  6. Blocking in Success: Plan Ahead for Big Dividends from a New Schedule.

    ERIC Educational Resources Information Center

    Cooper, Sylvia L.

    1996-01-01

    Examines the benefits of flexible scheduling and the initial steps used in exploring this approach. Discusses the problem of loss of instructional time and the use of an independent research period as a solution. Presents results from an external assessment, ACT score data, and CTBS scores. (DDR)

  7. Big-City Rules

    ERIC Educational Resources Information Center

    Gordon, Dan

    2011-01-01

    When it comes to implementing innovative classroom technology programs, urban school districts face significant challenges stemming from their big-city status. These range from large bureaucracies, to scalability, to how to meet the needs of a more diverse group of students. Because of their size, urban districts tend to have greater distance…

  8. Big(ger) Data as Better Data in Open Distance Learning

    ERIC Educational Resources Information Center

    Prinsloo, Paul; Archer, Elizabeth; Barnes, Glen; Chetty, Yuraisha; van Zyl, Dion

    2015-01-01

    In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously…

  9. Big Data Analysis Framework for Healthcare and Social Sectors in Korea

    PubMed Central

    Song, Tae-Min

    2015-01-01

    Objectives We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. Methods We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Results Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on an ICT-based policy of current government and the strategic goals of the Ministry of Health and Welfare. We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. Conclusions There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached. PMID:25705552

  10. Big data analysis framework for healthcare and social sectors in Korea.

    PubMed

    Song, Tae-Min; Ryu, Seewon

    2015-01-01

    We reviewed applications of big data analysis of healthcare and social services in developed countries, and subsequently devised a framework for such an analysis in Korea. We reviewed the status of implementing big data analysis of health care and social services in developed countries, and strategies used by the Ministry of Health and Welfare of Korea (Government 3.0). We formulated a conceptual framework of big data in the healthcare and social service sectors at the national level. As a specific case, we designed a process and method of social big data analysis on suicide buzz. Developed countries (e.g., the United States, the UK, Singapore, Australia, and even OECD and EU) are emphasizing the potential of big data, and using it as a tool to solve their long-standing problems. Big data strategies for the healthcare and social service sectors were formulated based on an ICT-based policy of current government and the strategic goals of the Ministry of Health and Welfare. We suggest a framework of big data analysis in the healthcare and welfare service sectors separately and assigned them tentative names: 'health risk analysis center' and 'integrated social welfare service network'. A framework of social big data analysis is presented by applying it to the prevention and proactive detection of suicide in Korea. There are some concerns with the utilization of big data in the healthcare and social welfare sectors. Thus, research on these issues must be conducted so that sophisticated and practical solutions can be reached.

  11. Female "Big Fish" Swimming against the Tide: The "Big-Fish-Little-Pond Effect" and Gender-Ratio in Special Gifted Classes

    ERIC Educational Resources Information Center

    Preckel, Franzis; Zeidner, Moshe; Goetz, Thomas; Schleyer, Esther Jane

    2008-01-01

    This study takes a second look at the "big-fish-little-pond effect" (BFLPE) on a national sample of 769 gifted Israeli students (32% female) previously investigated by Zeidner and Schleyer (Zeidner, M., & Schleyer, E. J., (1999a). "The big-fish-little-pond effect for academic self-concept, test anxiety, and school grades in…

  12. Privacy Challenges of Genomic Big Data.

    PubMed

    Shen, Hong; Ma, Jian

    2017-01-01

    With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently with low cost. However, such massive amount of personal genomic data creates tremendous challenge for privacy, especially given the emergence of direct-to-consumer (DTC) industry that provides genetic testing services. Here we review the recent development in genomic big data and its implications on privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  13. Big two personality and big three mate preferences: similarity attracts, but country-level mate preferences crucially matter.

    PubMed

    Gebauer, Jochen E; Leary, Mark R; Neberich, Wiebke

    2012-12-01

    People differ regarding their "Big Three" mate preferences of attractiveness, status, and interpersonal warmth. We explain these differences by linking them to the "Big Two" personality dimensions of agency/competence and communion/warmth. The similarity-attracts hypothesis predicts that people high in agency prefer attractiveness and status in mates, whereas those high in communion prefer warmth. However, these effects may be moderated by agentics' tendency to contrast from ambient culture, and communals' tendency to assimilate to ambient culture. Attending to such agentic-cultural-contrast and communal-cultural-assimilation crucially qualifies the similarity-attracts hypothesis. Data from 187,957 online-daters across 11 countries supported this model for each of the Big Three. For example, agentics-more so than communals-preferred attractiveness, but this similarity-attracts effect virtually vanished in attractiveness-valuing countries. This research may reconcile inconsistencies in the literature while utilizing nonhypothetical and consequential mate preference reports that, for the first time, were directly linked to mate choice.

  14. Photosynthesis, Productivity, and Yield of Maize Are Not Affected by Open-Air Elevation of CO2 Concentration in the Absence of Drought

    PubMed Central

    Leakey, Andrew D.B.; Uribelarrea, Martin; Ainsworth, Elizabeth A.; Naidu, Shawna L.; Rogers, Alistair; Ort, Donald R.; Long, Stephen P.

    2006-01-01

    While increasing temperatures and altered soil moisture arising from climate change in the next 50 years are projected to decrease yield of food crops, elevated CO2 concentration ([CO2]) is predicted to enhance yield and offset these detrimental factors. However, C4 photosynthesis is usually saturated at current [CO2] and theoretically should not be stimulated under elevated [CO2]. Nevertheless, some controlled environment studies have reported direct stimulation of C4 photosynthesis and productivity, as well as physiological acclimation, under elevated [CO2]. To test if these effects occur in the open air and within the Corn Belt, maize (Zea mays) was grown in ambient [CO2] (376 μmol mol−1) and elevated [CO2] (550 μmol mol−1) using Free-Air Concentration Enrichment technology. The 2004 season had ideal growing conditions in which the crop did not experience water stress. In the absence of water stress, growth at elevated [CO2] did not stimulate photosynthesis, biomass, or yield. Nor was there any CO2 effect on the activity of key photosynthetic enzymes, or metabolic markers of carbon and nitrogen status. Stomatal conductance was lower (−34%) and soil moisture was higher (up to 31%), consistent with reduced crop water use. The results provide unique field evidence that photosynthesis and production of maize may be unaffected by rising [CO2] in the absence of drought. This suggests that rising [CO2] may not provide the full dividend to North American maize production anticipated in projections of future global food supply. PMID:16407441

  15. Big Biology: Supersizing Science During the Emergence of the 21st Century

    PubMed Central

    Vermeulen, Niki

    2017-01-01

    Is biology the youngest member of the Big Science family? Increased collaboration in biological research became the subject of heated discussion in the wake of the Human Genome Project, but debates and reflections mostly remained polemical and showed limited appreciation for the diversity and explanatory power of the concept of Big Science. At the same time, scholars of science and technology studies have avoided the term Big Science in their descriptions of the changing research landscape. This interdisciplinary article combines a conceptual analysis of Big Science with diverse data and ideas from a multi-method study of several large research projects in biology. The aim is to develop an empirically grounded, nuanced, and analytically useful understanding of Big Biology and to move beyond the normative debates with their simple dichotomies and rhetorical positions. Although the concept of Big Science can be seen as a fashion in science policy, by now perhaps even an old-fashioned one, I argue that its analytical use directs our attention to the expansion of collaboration in the life sciences. The analysis of Big Biology reveals differences from Big Physics and other forms of Big Science, notably in the patterns of research organization, the technologies used, and the societal contexts in which it operates. Reflections on Big Science, Big Biology, and their relations to knowledge production can thus place recent claims about fundamental changes in life science research in a historical context. PMID:27215209

  16. Preliminary survey of the mayflies (Ephemeroptera) and caddisflies (Trichoptera) of Big Bend Ranch State Park and Big Bend National Park

    PubMed Central

    Baumgardner, David E.; Bowles, David E.

    2005-01-01

    The mayfly (Insecta: Ephemeroptera) and caddisfly (Insecta: Trichoptera) fauna of Big Bend National Park and Big Bend Ranch State Park are reported based upon numerous records. For mayflies, sixteen species representing four families and twelve genera are reported. By comparison, thirty-five species of caddisflies were collected during this study representing seventeen genera and nine families. Although the Rio Grande supports the greatest diversity of mayflies (n=9) and caddisflies (n=14), numerous spring-fed creeks throughout the park also support a wide variety of species. A general lack of data on the distribution and abundance of invertebrates in Big Bend National and State Park is discussed, along with the importance of continuing this type of research. PMID:17119610

  17. Fixing the Big Bang Theory's Lithium Problem

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-02-01

    How did our universe come into being? The Big Bang theory is a widely accepted and highly successful cosmological model of the universe, but it does introduce one puzzle: the cosmological lithium problem. Have scientists now found a solution? Too much lithium: In the Big Bang theory, the universe expanded rapidly from a very high-density, high-temperature state dominated by radiation. This theory has been validated again and again: the discovery of the cosmic microwave background radiation and observations of the large-scale structure of the universe both beautifully support it. But one pesky trouble spot remains: the abundance of lithium. [Figure: the arrows show the primary reactions involved in Big Bang nucleosynthesis; their flux ratios, as predicted by the authors' model, are given on the right. Synthesizing primordial elements is complicated! Hou et al. 2017] According to Big Bang nucleosynthesis theory, primordial nucleosynthesis ran wild during the first half hour of the universe's existence. This produced most of the universe's helium and small amounts of other light nuclides, including deuterium and lithium. But while predictions match the observed primordial deuterium and helium abundances, Big Bang nucleosynthesis theory overpredicts the abundance of primordial lithium by about a factor of three. This inconsistency is known as the cosmological lithium problem, and attempts to resolve it using conventional astrophysics and nuclear physics over the past few decades have not been successful. In a recent publication led by Suqing Hou (Institute of Modern Physics, Chinese Academy of Sciences) and advisor Jianjun He (Institute of Modern Physics / National Astronomical Observatories, Chinese Academy of Sciences), however, a team of scientists has proposed an elegant solution to this problem. [Figure: time and temperature evolution of the abundances of primordial light elements during the beginning of the universe; the authors' model is shown as dotted lines.]

  18. BIG: a large-scale data integration tool for renal physiology

    PubMed Central

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya

    2016-01-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488

  19. Maps showing estimated sediment yield from coastal landslides and active slope distribution along the Big Sur coast, Monterey and San Luis Obispo Counties, California

    USGS Publications Warehouse

    Hapke, Cheryl J.; Green, Krystal R.; Dallas, Kate

    2004-01-01

    The 1982-83 and 1997-98 El Niños brought very high precipitation to California's central coast; this precipitation resulted in raised groundwater levels, coastal flooding, and destabilized slopes throughout the region. Large landslides in the coastal mountains of Big Sur in Monterey and San Luis Obispo Counties blocked sections of California State Route 1, closing the road for months at a time. Large landslides such as these occur frequently in the winter months along the Big Sur coast due to the steep topography and weak bedrock. A large landslide in 1983 resulted in the closure of Highway 1 for over a year to repair the road and stabilize the slope. Resulting work from the 1983 landslide cost over $7 million and generated 30 million cubic yards of debris from landslide removal and excavations to re-establish the highway along the Big Sur coast. Before establishment of the Monterey Bay National Marine Sanctuary (MBNMS) in 1992, typical road opening measures involved disposal of some landslide material and excess material generated from slope stabilization onto the seaward side of the highway. It is likely that some or most of this disposed material, either directly or indirectly through subsequent erosion, was eventually transported downslope into the ocean. In addition to the landslides that initiate above the road, natural slope failures sometimes occur on the steep slopes below the road and thus deliver material to the base of the coastal mountains where it is eroded and dispersed by waves and nearshore currents. Any coastal-slope landslide, generated through natural or anthropogenic processes, can result in sediment entering the nearshore zone. The waters offshore of the Big Sur coast are part of the MBNMS. Since it was established in 1992, landslide-disposal practices came under question for two reasons. The U.S. Code of Federal Regulations, Title 15, Section 922.132 prohibits discharging or depositing, from beyond the boundary of the Sanctuary, any material

  20. AmeriFlux US-Rms RCEW Mountain Big Sagebrush

    DOE Data Explorer

    Flerchinger, Gerald [USDA Agricultural Research Service]

    2017-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-Rms RCEW Mountain Big Sagebrush. Site Description - The site is located on the USDA-ARS's Reynolds Creek Experimental Watershed. It is dominated by mountain big sagebrush on land managed by USDI Bureau of Land Management.

  1. Assessing the impacts of climate change and tillage practices on stream flow, crop and sediment yields from the Mississippi River Basin

    Treesearch

    P.B. Parajuli; P. Jayakody; G.F. Sassenrath; Y. Ouyang

    2016-01-01

    This study evaluated climate change impacts on stream flow, crop and sediment yields from three different tillage systems (conventional, reduced 1 – close to conservation, and reduced 2 – close to no-till) in the Big Sunflower River Watershed (BSRW) in Mississippi. The Soil and Water Assessment Tool (SWAT) model was applied to the BSRW using observed stream flow and crop...

  2. Vertebrate richness and biogeography in the Big Thicket of Texas

    Treesearch

    Michael H MacRoberts; Barbara R. MacRoberts; D. Craig Rudolph

    2010-01-01

    The Big Thicket of Texas has been described as rich in species and as a “crossroads”: a place where organisms from many different regions meet. We examine the species richness and regional affiliations of Big Thicket vertebrates. We found that the Big Thicket is neither exceptionally rich in vertebrates nor is it a crossroads for vertebrates. Its vertebrate fauna is...

  3. Quest for Value in Big Earth Data

    NASA Astrophysics Data System (ADS)

    Kuo, Kwo-Sen; Oloso, Amidu O.; Rilee, Mike L.; Doan, Khoa; Clune, Thomas L.; Yu, Hongfeng

    2017-04-01

    Among all the V's of Big Data challenges, such as Volume, Variety, Velocity, Veracity, etc., we believe Value is the ultimate determinant, because a system delivering better value has a competitive edge over others. Although it is not straightforward to assess the value of scientific endeavors, we believe the ratio of scientific productivity increase to investment is a reasonable measure. Our research in Big Data approaches to data-intensive analysis for Earth Science has yielded some insights, as well as evidence, of how optimal value might be attained. The first insight is that we should avoid, as much as possible, moving data through connections with relatively low bandwidth. That is, we recognize that moving data is expensive, albeit inevitable. They must at least be moved from the storage device into computer main memory and then to CPU registers for computation. When data must be moved, it is better to move them via relatively high-bandwidth connections and avoid low-bandwidth ones. For this reason, a technology that can best exploit data locality will have an advantage over others. Data locality is easy to achieve and exploit with only one dataset. With multiple datasets, data colocation becomes important in addition to data locality. However, datasets can be organized for co-location only for certain types of analyses. It is impossible for them to be co-located for all analyses. Therefore, our second insight is that we need to co-locate the datasets for the most commonly used analyses. In Earth Science, we believe the most common analysis requirement is "spatiotemporal coincidence". For example, when we analyze precipitation systems, we often would like to know the environmental conditions "where and when" (i.e. at the same location and time) there is precipitation. This "where and when" indicates the "spatiotemporal coincidence" requirement. Thus, an associated insight is that datasets need to be partitioned per the physical dimensions, i.e. space
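    The "spatiotemporal coincidence" requirement described in this abstract can be sketched as a join on shared space-time partition keys. The grid resolution, record layout, and function names below are illustrative assumptions, not the authors' actual system:

```python
# Sketch: co-locating two Earth-science datasets by a coarse space-time cell,
# so records that coincide "where and when" land in the same bucket.
from collections import defaultdict

def st_key(lat, lon, t, dlat=1.0, dlon=1.0, dt=3600):
    """Bucket a (lat, lon, time) triple into a coarse space-time cell."""
    return (int(lat // dlat), int(lon // dlon), int(t // dt))

def coincident(precip_records, env_records):
    """Pair precipitation records with environment records in the same cell."""
    buckets = defaultdict(list)
    for rec in env_records:
        buckets[st_key(rec["lat"], rec["lon"], rec["t"])].append(rec)
    return [(p, e)
            for p in precip_records
            for e in buckets.get(st_key(p["lat"], p["lon"], p["t"]), [])]

precip = [{"lat": 10.2, "lon": 45.7, "t": 7200, "mm": 3.1}]
env = [{"lat": 10.9, "lon": 45.1, "t": 7300, "rh": 0.8},
       {"lat": 50.0, "lon": 45.1, "t": 7300, "rh": 0.4}]
pairs = coincident(precip, env)  # only the first env record coincides
```

    Partitioning both datasets by the same key up front is what makes the co-location pay off: the join touches only matching cells instead of scanning every record pair.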

  4. Creating value in health care through big data: opportunities and policy implications.

    PubMed

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, makes big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current policies to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised. Project HOPE—The People-to-People Health Foundation, Inc.

  5. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data – lots of data – generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society, and our nation's security. Yet the volume, velocity, variety, and veracity of data are simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses, and more. This scientific capability is foundational to PNNL's energy, environment, and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change, and scientific exploration.

  6. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
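    The standard BB-BC iteration the article builds on alternates a "big crunch" (collapse the population to a fitness-weighted center of mass) with a "big bang" (scatter new candidates around that center with shrinking spread). A minimal continuous-variable sketch follows; it is illustrative only, not the article's refined discrete truss-sizing variant, and all names and parameters are assumptions:

```python
# Minimal standard Big Bang-Big Crunch loop for unconstrained minimization.
import random

def bb_bc(f, dim, bounds, pop=30, iters=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # Big Bang: random initial population
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=f)
    for k in range(1, iters + 1):
        # Big Crunch: contract to the fitness-weighted center of mass
        w = [1.0 / (1e-12 + f(x)) for x in xs]
        tot = sum(w)
        center = [sum(wi * x[d] for wi, x in zip(w, xs)) / tot
                  for d in range(dim)]
        if f(center) < f(best):
            best = center[:]
        # Big Bang: scatter new points around the center, shrinking with k
        xs = [[min(hi, max(lo, center[d] + rng.gauss(0, 1) * (hi - lo) / k))
               for d in range(dim)] for _ in range(pop)]
    return best

sphere = lambda x: sum(v * v for v in x)
x_star = bb_bc(sphere, dim=3, bounds=(-5.0, 5.0))
```

    The 1/k shrink of the scatter radius is what turns the random search into a convergent one; the article's reformulation targets the candidate-generation step, which this standard version handles naively.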

  7. How quantum is the big bang?

    PubMed

    Bojowald, Martin

    2008-06-06

    When quantum gravity is used to discuss the big bang singularity, the most important, though rarely addressed, question is what role genuine quantum degrees of freedom play. Here, complete effective equations are derived for isotropic models with an interacting scalar to all orders in the expansions involved. The resulting coupling terms show that quantum fluctuations do not affect the bounce much. Quantum correlations, however, do have an important role and could even eliminate the bounce. How quantum gravity regularizes the big bang depends crucially on properties of the quantum state.

  8. 76 FR 47141 - Big Horn County Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ....us , with the words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. [[Page 47142

  9. Big data science: A literature review of nursing research exemplars.

    PubMed

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015 was conducted. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of the state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review concerns whether nursing faculty and the programs preparing future scientists (PhD programs) are ready for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Priming the Pump for Big Data at Sentara Healthcare.

    PubMed

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

    Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office, and outside the electronic health record (EHR), has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan separate from but aligned with corporate system goals to ensure optimal investment and management of this essential asset.

  11. Exascale computing and big data

    DOE PAGES

    Reed, Daniel A.; Dongarra, Jack

    2015-06-25

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  12. Exascale computing and big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, Daniel A.; Dongarra, Jack

    Scientific discovery and engineering innovation require unifying traditionally separated high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.

  13. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes, and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-gram) and Dynamic algorithms are compared, and both have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
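    The Dice (di-gram) measure mentioned in this abstract is a simple bigram-overlap coefficient. A minimal sketch follows; the example attribute names are hypothetical, not drawn from caBIG's actual CDEs:

```python
# Dice bigram similarity: 2 * |shared bigrams| / (|bigrams(a)| + |bigrams(b)|).
def bigrams(s):
    s = s.lower()
    return [s[i:i + 2] for i in range(len(s) - 1)]

def dice(a, b):
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0
    shared = 0
    pool = list(bb)
    for g in ba:              # count shared bigrams with multiplicity
        if g in pool:
            pool.remove(g)
            shared += 1
    return 2.0 * shared / (len(ba) + len(bb))

# Hypothetical UML attribute vs. CDE property name:
score = dice("patientBirthDate", "PatientDateOfBirth")
```

    Scores near 1.0 indicate near-identical names; a mapper of the kind described would rank candidate CDEs for each UML attribute by such a score and surface the top matches for review.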

  14. Standard big bang nucleosynthesis and primordial CNO abundances after Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coc, Alain; Uzan, Jean-Philippe; Vangioni, Elisabeth, E-mail: coc@csnsm.in2p3.fr, E-mail: uzan@iap.fr, E-mail: vangioni@iap.fr

    Primordial or big bang nucleosynthesis (BBN) is one of the three historical strong pieces of evidence for the big bang model. The recent results by the Planck satellite mission have slightly changed the estimate of the baryonic density compared to the previous WMAP analysis. This article updates the BBN predictions for the light elements using the cosmological parameters determined by Planck, as well as an improvement of the nuclear network and new spectroscopic observations. There is a slight lowering of the primordial Li/H abundance; however, this lithium value still remains typically 3 times larger than its observed spectroscopic abundance in halo stars of the Galaxy. Given the importance of this 'lithium problem', we trace the small changes in its calculated BBN abundance following updates of the baryonic density, neutron lifetime, and networks. In addition, for the first time, we provide confidence limits for the production of ⁶Li, ⁹Be, ¹¹B, and CNO, resulting from our extensive Monte Carlo calculation with our extended network. A specific focus is cast on primordial CNO production. Considering uncertainties on the nuclear rates around CNO formation, we obtain CNO/H ≈ (5–30)×10⁻¹⁵. We further improve this estimate by analyzing correlations between yields and reaction rates and identify new influential reaction rates. These uncertain rates, if varied simultaneously, could lead to a significant increase of CNO production: CNO/H ∼ 10⁻¹³. This result is important for the study of population III star formation during the dark ages.

  15. Extending Big-Five Theory into Childhood: A Preliminary Investigation into the Relationship between Big-Five Personality Traits and Behavior Problems in Children.

    ERIC Educational Resources Information Center

    Ehrler, David J.; McGhee, Ron L.; Evans, J. Gary

    1999-01-01

    Investigation conducted to link Big-Five personality traits with behavior problems identified in childhood. Results show distinct patterns of behavior problems associated with various personality characteristics. Preliminary data indicate that identifying Big-Five personality trait patterns may be a useful dimension of assessment for understanding…

  16. Will Big Data Mean the End of Privacy?

    ERIC Educational Resources Information Center

    Pence, Harry E.

    2015-01-01

    Big Data is currently a hot topic in the field of technology, and many campuses are considering the addition of this topic into their undergraduate courses. Big Data tools are not just playing an increasingly important role in many commercial enterprises; they are also combining with new digital devices to dramatically change privacy. This article…

  17. Big Earth Data Initiative: Metadata Improvement: Case Studies

    NASA Technical Reports Server (NTRS)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management, and delivery of the U.S. Government's civil Earth observation data to improve the discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all of these goals.

  18. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  19. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...

  20. 36 CFR 7.41 - Big Bend National Park.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Big Bend National Park. 7.41 Section 7.41 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM § 7.41 Big Bend National Park. (a) Fishing; closed waters...