Thursday, February 22, 2018

The Gaming Industry and Big Data


The gaming industry is a booming gold mine for developers. From classic controller-based games to innovative VR titles, it lets individuals immerse themselves in a different reality. Because of this, people spend a great deal of time, and even more money, on gaming. Some ask what makes video games so appealing. The answer: Big Data.
The gaming industry is an ever-changing animal, updating year after year, and as a result it is a goliath for revenue. In 2017, mobile apps alone produced 40.6 billion dollars internationally. A major reason is that Big Data is now understood more thoroughly, and developers are steadily discovering ways to apply it to their video games. Not to mention, software companies, “like Microsoft, are seeing the value of data aggregation and acquiring gaming companies, like Minecraft for $2.5 billion” (Rands). Microsoft is starting to see the potential of utilizing Big Data for gaming. The question that remains is: what exactly is Big Data doing to attract such a plethora of companies?
First off, game design, and I'm not talking about the graphics. Game design here means building a game and making it as appealing as possible to the customer/gamer based on personal preferences. Developers strive to give their customers exactly what they want, and Big Data facilitates the process of altering certain aspects of a game. For example, a specific strategy game could be far too difficult or very glitchy. The developers can then go into the data, examine the problem, and change it. Big Data lets developers pinpoint problems specifically and tackle them head on, creating an overall better product that customers want to buy.
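As a rough illustration of this kind of telemetry analysis, here is a minimal sketch in Python; the session fields, level names, and thresholds are all invented for the example and are not taken from the article.

```python
# Hypothetical sketch: mining play-session telemetry to find problem levels.
import pandas as pd

# Each row is one play session: which level was played, whether the player
# failed it, and whether the client crashed.
sessions = pd.DataFrame({
    "level":   ["1-1", "1-2", "1-2", "2-1", "2-1", "2-1", "2-2"],
    "failed":  [0, 1, 1, 1, 1, 1, 0],
    "crashed": [0, 0, 1, 0, 0, 0, 0],
})

# Aggregate per level to surface difficulty spikes and glitch hot spots.
report = sessions.groupby("level").agg(
    plays=("failed", "size"),
    fail_rate=("failed", "mean"),
    crash_rate=("crashed", "mean"),
)

# Flag levels where most players fail or crashes are unusually common,
# so designers know exactly where to look.
problems = report[(report["fail_rate"] > 0.8) | (report["crash_rate"] > 0.1)]
print(problems)
```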
Due to Big Data, developers also focus on “freemium” games, which have no initial cost for the customer: the game is a free download, but the customer makes in-game purchases, such as new outfits for characters or extra lives. Through Big Data, developers can track customers' purchasing patterns and see what makes them buy specific items and why. Developers are able to pinpoint the customer's reasoning and enhance the game accordingly, which makes the customer want to play longer and, as mentioned before, helps developers make the best game possible.
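As another hedged sketch, a developer might log purchase events and ask which items sell and what in-game moment triggered the purchase; the event fields and item names below are made up for illustration.

```python
# Hypothetical sketch: spotting purchase patterns in a freemium game.
from collections import Counter, defaultdict

purchase_events = [
    {"player": "p1", "item": "extra_lives", "trigger": "failed_level"},
    {"player": "p2", "item": "outfit",      "trigger": "new_season"},
    {"player": "p1", "item": "extra_lives", "trigger": "failed_level"},
    {"player": "p3", "item": "extra_lives", "trigger": "failed_level"},
]

# Which items sell the most?
items_sold = Counter(event["item"] for event in purchase_events)

# For each item, which in-game moment nudged the player to buy?
triggers = defaultdict(Counter)
for event in purchase_events:
    triggers[event["item"]][event["trigger"]] += 1

print(items_sold.most_common())       # extra lives dominate in this toy data
print(dict(triggers["extra_lives"]))  # mostly bought right after a failed level
```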
Big Data helps game developers focus on the customer and their play styles, making the overall experience more personal. However, there is a potential problem for some with Big Data and gaming. Big Data tracks a player and their patterns very specifically, almost to an uncomfortable degree. Who is to say hackers will not be able to manipulate this data in the future and keep track of millions of accounts? If a close eye is not kept on the evolution of this data, millions of identities could be at risk.


https://www.cio.com/article/3251172/big-data/how-big-data-is-disrupting-the-gaming-industry.html   

Wednesday, February 21, 2018

How Intelligent Data Impacts AI


Vikas Shivpuriya’s article titled "AI Disruptions and the Power of Intelligence Data" discusses how artificial intelligence (AI) uses intelligent data to train AI systems in order to improve businesses and life in general.

Below is a Crash Course video that explains AI’s machine learning in detail:


One technique for AI machine learning that the article mentioned was both exciting and scary: specialization in inductive learning. First, an AI system uses generalization to create a basis of information about what things are from simple data sets. Then, specialization allows the algorithms to make decisions about more specific characteristics of these things from more varied data (Shivpuriya 4). This machine learning is extremely dependent on the training data that teaches machines to think the way people do. Ultimately, this means the more data a machine encounters, the smarter it gets and the easier it is able to recognize and understand things.
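To make the generalization/specialization idea concrete, here is a minimal, hedged sketch in Python; the toy "house pet" concept, its attributes, and the refinement rule are invented for illustration and are far simpler than what real systems do.

```python
# Toy sketch of generalization followed by specialization in inductive learning.
ANY = "?"  # wildcard: the attribute may take any value

def matches(hypothesis, example):
    """Does an example fit the current hypothesis?"""
    for attr, allowed in hypothesis.items():
        if allowed == ANY:
            continue
        allowed_set = allowed if isinstance(allowed, set) else {allowed}
        if example[attr] not in allowed_set:
            return False
    return True

# Phase 1 -- generalization: build a broad concept from simple positive examples.
positives = [
    {"has_fur": True, "legs": 4, "size": "small"},   # cat
    {"has_fur": True, "legs": 4, "size": "medium"},  # dog
]
hypothesis = dict(positives[0])
for example in positives[1:]:
    hypothesis = {attr: (val if example[attr] == val else ANY)
                  for attr, val in hypothesis.items()}
print(hypothesis)  # "size" becomes a wildcard: any size of furry four-legged thing

# Phase 2 -- specialization: more varied data (a negative example) shows the
# concept is too broad, so the learner narrows the wildcard to observed values.
bear = {"has_fur": True, "legs": 4, "size": "large"}  # furry, four legs, not a pet
if matches(hypothesis, bear):
    hypothesis["size"] = {ex["size"] for ex in positives}
print(matches(hypothesis, bear))  # False: the refined concept now excludes the bear
```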
This technique taps into my fear of the unknown because I'm unsure of what it means for data access in the future as robots and machines become more powerful and human-like. The data sets that AI systems learn from cannot be too narrow, or else the AI system will not be able to make decisions. Some of the data sets these machines learn from include social media and customer relationship management data. Therefore, if these machines need more data, in terms of both quantity and diversity, to work more efficiently, will there still be limitations keeping them from gaining access to private information? Furthermore, should there be limitations on what training data they learn from if their work is beneficial to society or businesses?
The importance of accurate and useful training data for effective machine learning is clear. Big data seems like the perfect candidate due to its volume and variety; however, big data often comes with junk data and most likely must be cleaned and managed first. The cost of sorting through this data to obtain useful metadata would be a concern for most companies, creating another obstacle to acquiring sufficient training data.
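A small, hypothetical sketch of that cleaning step might look like the following; the field names, junk markers, and metadata fields are invented for the example.

```python
# Hypothetical sketch: cleaning junk out of raw data before using it for training.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 2, 2, None, 4],
    "comment": ["great product", "N/A", "N/A", "love it", ""],
})

cleaned = (
    raw.dropna(subset=["user_id"])           # drop rows with no identifiable source
       .drop_duplicates()                    # remove exact duplicates
       .query("comment not in ['', 'N/A']")  # discard empty or placeholder text
)

# Lightweight metadata a company might record about the cleaning pass,
# which hints at the cost of turning big data into usable training data.
metadata = {
    "rows_in": len(raw),
    "rows_out": len(cleaned),
    "discard_rate": round(1 - len(cleaned) / len(raw), 2),
}
print(metadata)  # most of this toy data set turns out to be junk
```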
Another question this article leads me to ask is: will big data be powerful enough to teach machines to make decisions and think similarly to the way humans do, in what the video describes as strong AI? I ask this because, no matter the quantity and type of data that is fed to these machines, there are so many social nuances in human decision-making that the black-and-white computations of machines cannot contextualize or deal with. Because any computer system is very rigid, I question how “intelligent” the data used in machine learning must be in order for AI to work properly.
Overall, I definitely see the value of AI once its systems are complete and functioning. However, I believe obtaining the training data that allows these systems to reach that point can face problems such as data privacy, complications with big data, and the inflexibility of computers.


Big Data in UK Doctor's Offices

Big Data Helps UK Save Millions in Healthcare Costs


The UK National Health Service (NHS) is a nearly free healthcare system that serves, with few exceptions, everyone in the UK. This includes prescriptions, checkups, diagnostic examinations, and many surgical procedures. As such, it tries its best to save on costs wherever it can. Enter: Big Data. The Health Service started collecting, storing, and analyzing millions of medical records annually three years ago, and is now on target to reach one billion dollars in savings since then. Analytics can now provide the Service with information on drug effectiveness, successful cancer treatment regimens, and over-prescription of antibiotics, which is a growing global health concern.
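As a hedged illustration of the kind of aggregate analysis described, a sketch like the one below could flag clinics whose antibiotic prescribing rate sits well above the average; the records, clinic names, and threshold are invented and are not from the NHS data.

```python
# Hypothetical sketch: flagging unusually high antibiotic prescribing rates.
import pandas as pd

prescriptions = pd.DataFrame({
    "clinic":     ["A", "A", "B", "B", "B", "C", "C"],
    "drug_class": ["antibiotic", "statin", "antibiotic", "antibiotic",
                   "antibiotic", "statin", "antibiotic"],
})

# Share of prescriptions at each clinic that are antibiotics.
rates = (
    prescriptions.assign(is_antibiotic=lambda d: d["drug_class"] == "antibiotic")
                 .groupby("clinic")["is_antibiotic"]
                 .mean()
)

# Flag clinics prescribing antibiotics at 25% or more above the average rate.
average_rate = rates.mean()
flagged = rates[rates > 1.25 * average_rate]
print(rates)
print("flagged for review:", list(flagged.index))  # clinic B in this toy data
```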
Personally, I dislike the idea of the government knowing my health, especially if it happens to be my provider as well. The amount of data it is able to analyze is incredible, of course, and I'm sure people are happy that the government is tightening its belt, but I think long term this could be an issue. For one, the provider of this Big Data is Oracle, which only a few years ago was hacked by interns working for ERPScan Research. I can't imagine the embarrassment of both the government and ordinary citizens should this data ever become compromised. If people are taking drugs for personality or mood disorders, could private companies discriminate against them in hiring practices? I also don't think that simply analyzing this data is the end goal for the UK, especially if it is tracking drug over-prescription. Personally, I get sinus infections four or five times a year, and as such require four or five courses of pretty strong antibiotics. Should the government decide to limit antibiotics, or any other drugs, using this Big Data, people's health would be put at risk.
In addition, Big Data can't help the NHS reach the same level of organization and success that private enterprises currently enjoy. Unlike private companies, the NHS gains no real benefit from knowing market conditions or developing new products. The NHS does not currently develop drugs, so it has no way to address currently incurable or untreatable conditions. Since it is obligated to provide medical care anyway, knowing “market conditions” does little to help doctors and nurses provide treatment. Finally, new regulations about how these medical records can be analyzed, shared, and released are coming into effect later this spring. This greatly hampers the spread of information within the NHS, a sensitivity that private enterprises have to deal with much less frequently.
               It’s great to see that the government is looking to cut costs through big data analytics, but perhaps it’s best implemented in areas like traffic, security, education, and agriculture, equally admirable fields without the invasion of personal privacy. 

Friday, January 26, 2018

Violation of privacy?

Great changes always face initial opposition. In most cases, it is years later when they are recognized for their contributions to society. The use of data collected by big Web companies is one example, and it is subject to a controversy of advantages versus violation of privacy. When we focus on the disadvantages of companies invading our privacy and using our confidential information, we may fail to recognize the technological advances that have been made possible with the use of personal data. The question is: when do the benefits obtained from the use of this data justify the release of private information? The article titled "How the Data That Internet Companies Collect Can Be Used for the Public Good" by the Harvard Business Review provides examples of useful applications and intends to persuade the reader of the advantages of data collection.

There are undeniable advantages in using consumer data to establish market trends, improve products, and even make future predictions. There are also multiple applications that simplify our daily lives, such as the example provided in the article about urban planning and traffic management achieved by Waze and its Connected Citizen Program in 60 cities, which uses crowd-sourced data and applies it to ease urban congestion. Even more significant, data used for predictions can help save lives. An example is Flu Trends, launched by Google in 2008. By monitoring health-related searches all over the world, predictions about epidemics can be made before an outbreak. Another example is using geotagged Tweets or Facebook messages to identify victims of natural disasters and assist rescue efforts.
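As a hedged, Flu-Trends-style sketch, one could check whether weekly flu-related search volume tracks reported cases and flag sudden spikes as an early-warning signal; all of the numbers below are made up for illustration.

```python
# Hypothetical sketch: using search volume as an early-warning proxy for flu.
import numpy as np

# Weekly counts over ten weeks (invented numbers).
flu_searches = np.array([120, 130, 150, 210, 400, 650, 700, 620, 400, 250])
reported_flu = np.array([ 10,  12,  15,  22,  48,  80,  95,  77,  50,  30])

# A strong correlation suggests search volume is a useful proxy signal.
correlation = np.corrcoef(flu_searches, reported_flu)[0, 1]
print(f"correlation between searches and cases: {correlation:.2f}")

# A crude alert rule: flag any week-on-week jump in searches above 50%.
growth = np.diff(flu_searches) / flu_searches[:-1]
alert_weeks = (np.where(growth > 0.5)[0] + 1).tolist()  # weeks with a sharp jump
print("possible outbreak signal in weeks:", alert_weeks)
```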
In my opinion, there is an important distinction about the data that is collected that could help ease the controversy. I believe that if the data collected is not personally identifiable and cannot be tied to an individual, more people would be willing to share information to build useful applications. The next consideration is the role of regulation, such as audits from the Federal Trade Commission. The formulation of policy to protect individuals from abuse of personal information is vital.
Overall, I think of the advantages of open-source data, where information and the results of years of research are open to public access. This could definitely help propel more rapid advancements because there would be fewer duplicated efforts. We need to be open to the innovations that technological advances bring to our lives.

For more information, please visit the link below, where good incentives for sharing data can be seen in the areas of education and science. There is also advice on precautions, such as simply telling the owner of the data how it will be used.
 

https://www.youtube.com/watch?v=HJbo-OAaJ1I