Google Flu Trends was a web service operated by Google. It provided estimates of influenza activity for more than 25 countries. By aggregating Google search queries, it attempted to make accurate predictions about flu activity. This project was first launched in 2008 by Google.org to help predict outbreaks of flu.

Google Flu Trends no longer publishes current estimates. Historical estimates are still available for download, and current data are offered for declared research purposes.


History

The idea behind Google Flu Trends (GFT) was that, by monitoring millions of users' health-related search behavior online, the large volume of Google search queries could be analyzed to reveal whether flu-like illness was present in a population. Google Flu Trends compared these findings to a historical baseline level of influenza activity for the corresponding region and then reported the activity level as minimal, low, moderate, high, or intense. These estimates were generally consistent with conventional surveillance data collected by health agencies, both nationally and regionally.

Roni Zeiger helped develop Google Flu Trends.


Methods

Google Flu Trends was described as using the following method to gather information about flu trends.

First, a time series is computed for about 50 million common queries entered weekly within the United States from 2003 to 2008. Each query's time series is computed separately for each state and normalized into a fraction by dividing the query's count by the total number of queries submitted in that state. The state in which a query was entered is determined from the IP address associated with the search.
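The normalization step can be illustrated with a short sketch. The snippet below assumes a hypothetical pandas DataFrame of weekly per-state query counts; the column names and layout are illustrative and do not come from Google's actual pipeline.

```python
# Minimal sketch of the query-fraction normalization described above.
# Assumes a hypothetical DataFrame `searches` with one row per
# (week, state, query) and a raw `count` column.
import pandas as pd

def query_fractions(searches: pd.DataFrame) -> pd.DataFrame:
    """Convert raw weekly query counts into per-state query fractions."""
    # Total number of queries submitted in each state during each week.
    totals = searches.groupby(["week", "state"])["count"].transform("sum")
    searches = searches.copy()
    # Each query's share of all queries from that state in that week.
    searches["fraction"] = searches["count"] / totals
    # One time series per (state, query): weeks as rows, fractions as values.
    return searches.pivot_table(index="week",
                                columns=["state", "query"],
                                values="fraction")
```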

A linear model is then used to relate the log-odds of an influenza-like illness (ILI) physician visit to the log-odds of an ILI-related search query:

logit(P) = β₀ + β₁ × logit(Q) + ε

P is the percentage of ILI-related physician visits and Q is the ILI-related query fraction computed in the previous step. β₀ is the intercept, β₁ is the coefficient, and ε is the error term.
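As a rough illustration, a fit of this form can be obtained with ordinary least squares on the logit-transformed series. The sketch below assumes P and Q are arrays of weekly values strictly between 0 and 1; the function and variable names are illustrative.

```python
# Minimal sketch of fitting logit(P) = beta0 + beta1 * logit(Q) + eps
# by ordinary least squares. P and Q are assumed to be aligned weekly
# arrays of proportions (strictly between 0 and 1).
import numpy as np

def logit(x: np.ndarray) -> np.ndarray:
    return np.log(x / (1.0 - x))

def fit_logit_model(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Return [beta0, beta1] from a least-squares fit on logit scale."""
    X = np.column_stack([np.ones_like(Q), logit(Q)])  # design matrix [1, logit(Q)]
    beta, *_ = np.linalg.lstsq(X, logit(P), rcond=None)
    return beta  # beta[0] is the intercept, beta[1] is the slope
```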

Each of the 50 million queries is tested as Q to see whether the result computed from a single query matches the actual historical ILI data obtained from the U.S. Centers for Disease Control and Prevention (CDC). This process produces a list of the queries that give the most accurate predictions of CDC ILI data under the linear model. The top 45 queries are then chosen because, when aggregated, they fit the historical data most accurately. Using the sum of the top 45 ILI-related query fractions, the linear model is fitted to the weekly ILI data between 2003 and 2007 to obtain the coefficients. Finally, the trained model is used to predict flu outbreaks across all regions in the United States.
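The selection-and-aggregation step might look roughly like the following sketch, which scores each candidate query by how well its logit-transformed series correlates with the logit of the CDC ILI series, keeps the best 45, and refits the model on their sum (reusing the fit_logit_model helper sketched above). The data layout and the scoring metric are assumptions, not the published implementation.

```python
# Rough sketch of ranking candidate queries against historical CDC ILI data
# and aggregating the top 45 into a single predictor.
import numpy as np

def select_top_queries(query_fracs: dict[str, np.ndarray],
                       cdc_ili: np.ndarray,
                       n_top: int = 45):
    logit = lambda x: np.log(x / (1.0 - x))
    # Score every candidate query by its agreement with the CDC series.
    scores = {name: np.corrcoef(logit(q), logit(cdc_ili))[0, 1]
              for name, q in query_fracs.items()}
    best = sorted(scores, key=scores.get, reverse=True)[:n_top]
    # Sum the fractions of the selected queries and refit one final model.
    combined = sum(query_fracs[name] for name in best)
    return best, fit_logit_model(cdc_ili, combined)
```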

This algorithm has been subsequently revised by Google, partially in response to concerns about accuracy, and attempts to replicate its results have suggested that the algorithm developers "felt an unarticulated need to cloak the actual search terms identified".


Privacy concerns

Google Flu Trends tried to avoid privacy violations by aggregating only millions of anonymous search queries, without identifying the individuals who performed the searches. Its search logs contain the IP address of each user, which could be used to trace the region from which a search query was originally submitted. The data were accessed and processed entirely by automated programs, with no human involvement. Google also implemented a policy of anonymizing IP addresses in its search logs after nine months.

However, Google Flu Trends raised privacy concerns among some privacy groups. The Electronic Privacy Information Center and Patient Privacy Rights sent a letter in 2008 to Eric Schmidt, then the CEO of Google. They conceded that the use of user-generated data could support public health efforts in significant ways, but expressed concern that "user-specific investigations could be compelled, even over Google's objection, by court order or Presidential authority".


Impact

An initial motivation for GFT was that identifying disease activity early and responding quickly could reduce the impact of seasonal and pandemic influenza. One report found that Google Flu Trends was able to predict regional outbreaks of flu up to 10 days before they were reported by the CDC.

During the 2009 flu pandemic, Google Flu Trends tracked flu activity in the United States. In February 2010, the CDC identified a spike in influenza cases in the mid-Atlantic region of the United States. Google's data on flu-symptom search queries, however, had shown the same spike two weeks before the CDC report was released.

"The earlier the warning, the earlier prevention and control measures can be put in place, and this could prevent cases of influenza," said Dr. Lyn Finelli, lead for surveillance at the influenza division of the CDC. "From 5 to 20 percent of the nation's population contract the flu each year, leading to roughly 36,000 deaths on average."

Google Flu Trends is an example of collective intelligence that can be used to identify trends and calculate predictions. The data amassed by search engines is significantly insightful because the search queries represent people's unfiltered wants and needs. "This seems like a really clever way of using data that is created unintentionally by the users of Google to see patterns in the world that would otherwise be invisible," said Thomas W. Malone, a professor at the Sloan School of Management at MIT. "I think we are just scratching the surface of what's possible with collective intelligence."


Accuracy

The initial Google paper stated that the Google Flu Trends predictions were 97% accurate compared with CDC data. Subsequent reports, however, asserted that Google Flu Trends' predictions were sometimes very inaccurate, especially over the interval 2011-2013, when it consistently overestimated flu prevalence and, over one interval in the 2012-2013 flu season, predicted twice as many doctors' visits as the CDC recorded.

One source of problems is that people making flu-related Google searches may know very little about how to diagnose flu; searches for flu or flu symptoms may well be researching disease symptoms that are similar to flu, but are not actually flu. Furthermore, analysis of search terms reportedly tracked by Google, such as "fever" and "cough", as well as effects of changes in its search algorithm over time, has raised concerns about the meaning of its predictions. In fall 2013, Google began attempting to compensate for increases in searches due to prominence of flu in the news, which was found to have previously skewed results. However, one analysis concluded that "by combining GFT and lagged CDC data, as well as dynamically recalibrating GFT, we can substantially improve on the performance of GFT or the CDC alone." A later study also demonstrated that Google search data can indeed be used to improve estimates, reducing the errors seen in a model using CDC data alone by up to 52.7 per cent.
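As a minimal illustration of that combination idea (not the cited study's actual method), one could regress the current CDC ILI value on the concurrent GFT estimate and the previous week's published CDC value, refitting as each new week of data arrives. The array layout below is an assumption.

```python
# Sketch of combining GFT estimates with lagged CDC data via least squares.
import numpy as np

def recalibrated_estimate(gft: np.ndarray, cdc: np.ndarray) -> float:
    """Predict the current week's (not yet published) CDC ILI value.

    gft: GFT estimates for weeks 1..n (including the current week n).
    cdc: published CDC ILI values for weeks 1..n-1.
    """
    m = len(cdc)  # m = n - 1 published CDC weeks
    # Training rows: weeks 2..n-1, each explained by that week's GFT
    # estimate and the CDC value from the week before.
    X = np.column_stack([np.ones(m - 1), gft[1:m], cdc[:-1]])
    y = cdc[1:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Apply the fitted weights to the current GFT estimate and last CDC value.
    return float(beta @ np.array([1.0, gft[m], cdc[-1]]))
```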

By re-assessing the original GFT model, researchers found that it was aggregating queries about different health conditions, which could lead to an over-prediction of ILI rates; in the same work, a series of better-performing linear and nonlinear approaches to ILI modelling were proposed.


Related systems

Similar projects, such as the flu-prediction project by the Institute of Cognitive Science at Osnabrück, carry the basic idea forward by combining social media data (e.g., from Twitter) with CDC data and structural models that infer the spatial and temporal spread of the disease.

