Profitable App Profiles for the App Store and Google Play Markets
Feb 25, 2022

Analysis of Google Play and Apple App Store apps
The goal of this project is to analyze apps on Google Play and the Apple App Store to understand which kinds of apps attract the most users. Based on the analysis, a new app will be developed for English-speaking users and offered for free on both stores. The developers will earn revenue through in-app ads, so the more users who see and engage with the ads, the better.
# Open the datasets and store them as lists
from csv import reader
applestore_dataset = open('AppleStore.csv', encoding = 'utf8')
googleplay_dataset = open('googleplaystore.csv', encoding = 'utf8')
apple_data = list(reader(applestore_dataset))
google_data = list(reader(googleplay_dataset))
# A function for exploring data from a dataset. The function
# assumes that the dataset parameter doesn't have a header row
def explore_data(dataset, start, end, rows_and_columns=False):
    dataset_slice = dataset[start:end]
    for row in dataset_slice:
        print(row)
        print('\n')  # adds a new (empty) line after each row
    if rows_and_columns:
        print('Number of rows:', len(dataset))
        print('Number of columns:', len(dataset[0]))
print("Printing first five rows of AppleStore dataset...")
explore_data(apple_data[1:], 0, 5, True)
print('\n')
print("Now printing first five rows of GooglePlay dataset...")
explore_data(google_data[1:], 0, 5, True)
At this point, it is clear that every value in these datasets is stored as a string, so datatype conversion will be necessary before performing any arithmetic on the numeric columns. Also, the number of columns differs between the two datasets. Let's find out what the columns represent.
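As a quick illustration of why the conversion matters, here is a minimal sketch (not part of the cleaning itself, and assuming the review count sits at index 3 of each Google Play row, which the header printed in the next cell confirms):

# Numeric-looking fields are still plain strings until we convert them
sample_value = google_data[1][3]   # review count of the first app (index assumed, see header below)
print(type(sample_value))          # <class 'str'>
print(float(sample_value) + 1)     # arithmetic only works after conversion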
# print the header rows of each dataset
print(apple_data[0])
print('\n')
print(google_data[0])
Now it's fun time! Let's check whether the data is clean, i.e. whether the datasets contain any wrong or duplicate entries. First, let's clean the Google Play dataset. If we read the discussion section for this dataset carefully, we find that the row at index 10473 (counting the header row) contains wrong data.
print(google_data[10473])
print('Length of the incorrect row: ', len(google_data[10473]))
Clearly, one value is missing from this row, since the header row has 13 columns. On closer inspection, we find that the value for the 'Category' column is missing, so we delete this row.
del google_data[10473]
print(google_data[10473])
Now it's time to check for duplicate data. The following check_duplicates() function looks for duplicates in a dataset. If any duplicates are found, it collects the duplicate rows in a list called duplicate_apps.
def check_duplicates(dataset, appname_index, header = True):
    unique_apps = []
    duplicate_apps = []
    start = 1 if header else 0
    for app in dataset[start:]:
        app_name = app[appname_index]
        if app_name not in unique_apps:
            unique_apps.append(app_name)
        else:
            duplicate_apps.append(app)
    return duplicate_apps
duplicates = check_duplicates(google_data, 0)
print(duplicates[:5])
print('Total number of duplicates: ', len(duplicates))
Now it's time for some decision making! We need to keep exactly one row for each duplicated app and discard the others. Instead of deleting duplicate rows at random, we need a selection criterion. Let's look at the duplicate rows for a few apps and see whether there is any difference between them.
some_duplicate_apps = duplicates[:3]
sample_app_names = []
for apps in some_duplicate_apps:
    app_name = apps[0]
    sample_app_names.append(app_name)
for app in google_data[1:]:
    if app[0] in sample_app_names:
        print(app)
If we look at the duplicate entries for the 'Quick PDF Scanner' app, the main difference occurs in the fourth field of each row, which corresponds to the number of reviews. The different numbers show that the data was collected at different times. We can use this to build a criterion for removing the duplicates: rather than removing rows randomly, we'll keep the row with the highest number of reviews, because the more reviews an app has, the more reliable its ratings should be. To do that, we will:
- Create a dictionary where each key is a unique app name, and the value is the highest number of reviews of that app
- Use the dictionary to create a new dataset with only one entry per app (the entry with the highest number of reviews)
reviews_max = {}
for app in google_data[1:]:
    name = app[0]
    n_reviews = float(app[3])
    if name in reviews_max and n_reviews > reviews_max[name]:
        reviews_max[name] = n_reviews
    elif name not in reviews_max:
        reviews_max[name] = n_reviews
print(len(reviews_max))
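As a quick sanity check (a sketch added here for context, not part of the original workflow), the number of unique names should equal the total number of data rows minus the duplicates we counted earlier:

# Sanity check: unique app count = data rows - duplicate rows found above
expected_unique = len(google_data[1:]) - len(duplicates)
print(expected_unique)
print(len(reviews_max) == expected_unique)   # expected to print True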
Now, let's use the reviews_max dictionary to remove the duplicates. For the duplicate cases, we'll only keep the entries with the highest number of reviews. In the code cell below:
- We start by initializing two empty lists, android_clean and already_added.
- We loop through the android data set, and for every iteration:
- We isolate the name of the app and the number of reviews.
- We add the current row (app) to the android_clean list, and the app name (name) to the already_added list if:
- The number of reviews of the current app matches the number of reviews of that app as described in the reviews_max dictionary; and
- The name of the app is not already in the already_added list. We need to add this supplementary condition to account for those cases where the highest number of reviews of a duplicate app is the same for more than one entry (for example, the Box app has three entries, and the number of reviews is the same). If we just check for reviews_max[name] == n_reviews, we'll still end up with duplicate entries for some apps.
# The cleaned android dataset doesn't contain any header
android_clean = []
already_added = []
for app in google_data[1:]:
    name = app[0]
    n_reviews = float(app[3])
    if n_reviews == reviews_max[name] and name not in already_added:
        android_clean.append(app)
        already_added.append(name)
print(len(android_clean))
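To confirm the deduplication worked as intended, a one-line check (just a sketch) verifies that the cleaned list contains exactly one entry per unique app name:

# The cleaned list should have exactly one row per key in reviews_max
assert len(android_clean) == len(reviews_max)
print("Deduplication check passed:", len(android_clean), "unique apps")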
Now let's check whether there is any wrong data in the App Store dataset. After reading the discussion section for this dataset, we didn't notice any reports of missing data. However, to make sure, we check whether every row in the App Store dataset has the same length as the header row. If a row has a different length, it is probably missing some data, and we will remove it from the dataset.
def check_rows_with_missing_data(dataset, header_length, header = True):
    start = 1 if header else 0
    index = 0
    row_index_with_missing_data = []
    for app in dataset[start:]:
        # A row whose length differs from the header is probably missing data
        if len(app) != header_length:
            row_index_with_missing_data.append(index)
        index += 1
    print("Total number of rows with incorrect data: ",
          len(row_index_with_missing_data))
    return row_index_with_missing_data

incorrect_rows = check_rows_with_missing_data(apple_data, len(apple_data[0]))
So, there are no rows in the App Store dataset with missing data. Now, we will check for duplicate rows.
print(apple_data[0])
appname_index = 0
duplicate_appstore_app = check_duplicates(apple_data, appname_index)
print("Total number of dupicate apps: ", len(duplicate_appstore_app))
So, we don't need to do any extra work to remove duplicates from the App Store dataset. However, recall that the target audience for our app is native English speakers, so we only want to analyze apps directed toward an English-speaking audience. If we explore the data long enough, we'll find that both datasets contain apps whose names suggest they are not directed toward an English-speaking audience. We're not interested in keeping these apps, so we'll remove them. One way to go about this is to remove each app whose name contains a symbol that is not commonly used in English text.
The numbers corresponding to the characters we commonly use in an English text are all in the range 0 to 127, according to the ASCII (American Standard Code for Information Interchange) system. Based on this number range, we can build a function that detects whether a character belongs to the set of common English characters or not. If the number is equal to or less than 127, then the character belongs to the set of common English characters.
def is_english_name(app_name):
    ''' Checks whether the app name contains characters that don't belong to
    the set of common English characters. If three or fewer non-English
    characters are found, returns True; otherwise, returns False.
    '''
    number_nonenglish_char = 0
    for char in app_name:
        if ord(char) > 127:
            number_nonenglish_char += 1
    return number_nonenglish_char <= 3
# Test is_english_name() function
print(is_english_name("Instragram"))
print(is_english_name('爱奇艺PPS -《欢乐颂2》电视剧热播'))
print(is_english_name('Docs To Go™ Free Office Suite'))
print(is_english_name('Instachat 😜'))
Now let's use the is_english_name() function to filter out non-English apps from both datasets.
english_android_apps = []
for app in android_clean:
    app_name = app[0]
    if is_english_name(app_name):
        english_android_apps.append(app)
print("Total English apps in the Google Play Store: ", len(english_android_apps))

english_ios_apps = []
for app in apple_data[1:]:
    app_name = app[1]
    if is_english_name(app_name):
        english_ios_apps.append(app)
print("Total English apps in the App Store: ", len(english_ios_apps))
As we have already mentioned, we only build apps that are free to download and install, and our main source of revenue will be in-app ads. Our datasets contain both free and paid apps, so we'll need to isolate only the free apps for our analysis. Isolating the free apps is the last step of the data cleaning process.
print("Some samples of Google Playstore Dataset:\n")
print(google_data[0])
print("\n")
explore_data(english_android_apps, 0, 3)
print("\n")
print("The columns of Apple store Dataset:\n")
print(apple_data[0])
print("\n")
explore_data(english_ios_apps, 0, 3)
free_android_english = []
free_ios_english = []
for app in english_android_apps:
    price = app[7]
    if price == '0':
        free_android_english.append(app)
for app in english_ios_apps:
    price = app[4]
    if price == '0.0':
        free_ios_english.append(app)
print("The number of free English apps in the Google Play Store: ",
      len(free_android_english))
print("The number of free English apps in the App Store: ",
      len(free_ios_english))
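As an optional recap (a sketch added for context, not part of the original workflow), we can summarize how much of each dataset survived the cleaning steps; the Google Play total below counts rows after the one malformed row was removed:

# Proportion of the loaded data left after all cleaning steps
print("Google Play: {} of {} rows remain ({:.1f}%)".format(
    len(free_android_english), len(google_data[1:]),
    len(free_android_english) / len(google_data[1:]) * 100))
print("App Store:   {} of {} rows remain ({:.1f}%)".format(
    len(free_ios_english), len(apple_data[1:]),
    len(free_ios_english) / len(apple_data[1:]) * 100))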
Now we have successfully cleaned the data, and it's time to do some real work! Our aim is to determine the kinds of apps that are likely to attract more users, because our revenue is highly influenced by the number of people using our apps. Because our end goal is to release the app on both Google Play and the App Store, we need to find app profiles that are successful in both markets. Let's begin the analysis by getting a sense of the most common genres in each market. For this, we'll build frequency tables for a few columns in our datasets.
def freq_table(dataset, index):
    ''' The function returns a frequency table as a dictionary for the column
    specified by @parameter index. It assumes that the dataset has no header
    row.
    '''
    freq_dict = {}
    freq_as_percentage = {}
    total_app = len(dataset)
    for row in dataset:
        key = row[index]
        if key not in freq_dict:
            freq_dict[key] = 1
        else:
            freq_dict[key] += 1
    for key, val in freq_dict.items():
        freq_as_percentage[key] = (val / total_app) * 100
    return freq_dict, freq_as_percentage
def display_table(dataset, index):
    table = freq_table(dataset, index)[1]
    table_display = []
    for key in table:
        key_val_as_tuple = (table[key], key)
        table_display.append(key_val_as_tuple)
    table_sorted = sorted(table_display, reverse = True)
    for entry in table_sorted:
        print("{0} : {1:.2f}".format(entry[1], entry[0]))
display_table(free_ios_english, 11)
We can see that among the free English apps, more than half (58.16%) are games. Entertainment apps are close to 8%, followed by photo and video apps at close to 5%. Only 3.66% of the apps are designed for education, followed by social networking apps, which account for 3.29% of the apps in our dataset.
The general impression is that the App Store (at least the part containing free English apps) is dominated by apps designed for fun (games, entertainment, photo and video, social networking, sports, music, etc.), while apps with practical purposes (education, shopping, utilities, productivity, lifestyle, etc.) are rarer. However, the fact that fun apps are the most numerous doesn't imply that they also have the greatest number of users: demand might not match supply.
Let's continue by examining the Genres and Category columns of the Google Play data set (two columns which seem to be related).
display_table(free_android_english, 1)  # Category column
The landscape seems significantly different on Google Play: there are not that many apps designed for fun, and it seems that a good number of apps are designed for practical purposes (family, tools, business, lifestyle, productivity, etc.). However, if we investigate this further, we can see that the family category (which accounts for almost 19% of the apps) means mostly games for kids. Even so, practical apps seem to have a better representation on Google Play compared to App Store. This picture is also confirmed by the frequency table we see for the Genres column:
display_table(free_android_english, 9)  # Genres column
The difference between the Genres and the Category columns is not crystal clear, but one thing we can notice is that the Genres column is much more granular (it has more categories). We're only looking for the bigger picture at the moment, so we'll only work with the Category column moving forward.
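To make the granularity claim concrete, a small sketch (reusing freq_table and the same column indices as above) counts how many distinct values each column has:

# Compare the number of distinct values in the Category and Genres columns
n_categories = len(freq_table(free_android_english, 1)[0])   # Category column
n_genres = len(freq_table(free_android_english, 9)[0])       # Genres column
print("Distinct categories:", n_categories)
print("Distinct genres:", n_genres)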
Up to this point, we found that the App Store is dominated by apps designed for fun, while Google Play shows a more balanced landscape of both practical and for-fun apps. Now we'd like to get an idea about the kind of apps that have most users.
One way to find out which genres are the most popular (have the most users) is to calculate the average number of installs for each app genre. For the Google Play dataset, we can find this information in the Installs column, but for the App Store dataset this information is missing. As a workaround, we'll take the total number of user ratings as a proxy, which we can find in the rating_count_tot column.
Below, we calculate the average number of user ratings per app genre on the App Store:
genre_freq_table = freq_table(free_ios_english, 11)[0]
print(genre_freq_table)

# Sum the number of user ratings (rating_count_tot, index 5) for each genre
map_genre_rating = {}
for app in free_ios_english:
    genre = app[11]
    n_ratings = float(app[5])
    if genre not in map_genre_rating:
        map_genre_rating[genre] = n_ratings
    else:
        map_genre_rating[genre] += n_ratings

# Divide each genre's total by its number of apps to get the average
most_popular_genre = []
for genre, number in genre_freq_table.items():
    total = map_genre_rating[genre]
    average_rating_per_genre = total / number
    most_popular_genre.append((genre, average_rating_per_genre))

sorted_popular_genre = sorted(most_popular_genre, key = lambda elem: elem[1],
                              reverse = True)
for elem in sorted_popular_genre:
    genre_name = elem[0]
    average_ratings = elem[1]
    print('{}: {:.2f}'.format(genre_name, average_ratings))
On average, navigation apps have the highest number of user reviews, but this figure is heavily influenced by Waze and Google Maps, which have close to half a million user reviews together:
for app in free_ios_english:
    if app[-5] == 'Navigation':
        print(app[1], ':', app[5])
The same pattern applies to social networking apps, where the average number is heavily influenced by a few giants like Facebook, Pinterest, Skype, etc. Same applies to music apps, where a few big players like Pandora, Spotify, and Shazam heavily influence the average number.
Our aim is to find popular genres, but navigation, social networking, or music apps might seem more popular than they really are. The average number of ratings appears to be skewed by a very small number of apps with hundreds of thousands of user ratings, while the other apps may struggle to get past the 10,000 threshold.
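To illustrate the skew, here is a quick sketch using the same columns as above; the 10,000 cutoff is simply the threshold mentioned in the previous sentence:

# How many Social Networking apps actually clear the 10,000-rating threshold?
social_apps = [app for app in free_ios_english if app[-5] == 'Social Networking']
above_threshold = [app for app in social_apps if float(app[5]) > 10000]
print(len(above_threshold), 'of', len(social_apps),
      'social networking apps have more than 10,000 ratings')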
Reference apps have 74,942 user ratings on average, but it's actually the Bible and Dictionary.com apps that skew up the average:
for app in free_ios_english:
    if app[-5] == 'Reference':
        print(app[1], ':', app[5])
However, this niche seems to show some potential. One thing we could do is take another popular book and turn it into an app where we could add different features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes about the book, etc. On top of that, we could also embed a dictionary within the app, so users don't need to exit our app to look up words in an external app.
This idea seems to fit well with the fact that the App Store is dominated by for-fun apps. This suggests the market might be a bit saturated with for-fun apps, which means a practical app might have more of a chance to stand out among the huge number of apps on the App Store.
Now let's analyze the Google Play market a bit. For the Google Play market, we actually have data about the number of installs, so we should be able to get a clearer picture about genre popularity. However, the install numbers don't seem precise enough — we can see that most values are open-ended (100+, 1,000+, 5,000+, etc.):
display_table(free_android_english, 5)
One problem with this data is that it is not precise. For instance, we don't know whether an app with 100,000+ installs has 100,000 installs, 200,000, or 350,000. However, we don't need very precise data for our purposes; we only want to get an idea of which app genres attract the most users, so we don't need perfect precision with respect to the number of users.
We're going to leave the numbers as they are, which means that we'll consider that an app with 100,000+ installs has 100,000 installs, and an app with 1,000,000+ installs has 1,000,000 installs, and so on.
To perform computations, however, we'll need to convert each install number to float — this means that we need to remove the commas and the plus characters, otherwise the conversion will fail and raise an error. We'll do this directly in the loop below, where we also compute the average number of installs for each genre (category).
category_freq_table = freq_table(free_android_english, 1)[0]
print(category_freq_table)

def clean_convert_install(install):
    ''' Removes commas and plus signs from an install figure and converts
    it to a float (e.g. '100,000+' -> 100000.0). '''
    install = install.replace(",", "")
    install = install.replace("+", "")
    return float(install)

# Sum the installs (index 5) for each category
map_category_installs = {}
for app in free_android_english:
    category = app[1]
    install = clean_convert_install(app[5])
    if category not in map_category_installs:
        map_category_installs[category] = install
    else:
        map_category_installs[category] += install

# Divide each category's total by its number of apps to get the average
most_popular_category = []
for category, number in category_freq_table.items():
    total = map_category_installs[category]
    average_installs_per_category = total / number
    most_popular_category.append((category, average_installs_per_category))

sorted_popular_category = sorted(most_popular_category, key = lambda elem: elem[1],
                                 reverse = True)
for elem in sorted_popular_category:
    category_name = elem[0]
    average_installs = elem[1]
    print('{}: {:.2f}'.format(category_name, average_installs))
On average, communication apps have the most installs: 38,456,119. This number is heavily skewed up by a few apps with over one billion installs (WhatsApp, Facebook Messenger, Skype, Google Chrome, Gmail, and Hangouts), and a few others with over 100 million and 500 million installs:
for app in free_android_english:
    if app[1] == 'COMMUNICATION' and (app[5] == '1,000,000,000+'
                                      or app[5] == '500,000,000+'
                                      or app[5] == '100,000,000+'):
        print(app[0], ':', app[5])
We see the same pattern for the video players category, which is the runner-up with 24,727,872 installs on average. The market is dominated by apps like YouTube, Google Play Movies & TV, and MX Player. The pattern repeats for social apps (where we have giants like Facebook, Instagram, Google+, etc.), photography apps (Google Photos and other popular photo editors), and productivity apps (Microsoft Word, Dropbox, Google Calendar, Evernote, etc.).
Again, the main concern is that these app genres might seem more popular than they really are. Moreover, these niches seem to be dominated by a few giants who are hard to compete against.
The game genre seems pretty popular, but previously we found out this part of the market seems a bit saturated, so we'd like to come up with a different app recommendation if possible.
The books and reference genre looks fairly popular as well, with an average of 8,767,811 installs. It's worth exploring this genre in more depth, since we found it has potential to work well on the App Store, and our aim is to recommend an app genre that shows potential for being profitable on both the App Store and Google Play.
for app in free_android_english:
    if app[1] == 'BOOKS_AND_REFERENCE':
        print(app[0], ':', app[5])
The book and reference genre includes a variety of apps: software for processing and reading ebooks, various collections of libraries, dictionaries, tutorials on programming or languages, etc. It seems there's still a small number of extremely popular apps that skew the average:
for app in free_android_english:
    if app[1] == 'BOOKS_AND_REFERENCE' and (app[5] == '1,000,000,000+'
                                            or app[5] == '500,000,000+'
                                            or app[5] == '100,000,000+'):
        print(app[0], ':', app[5])
However, it looks like there are only a few very popular apps, so this market still shows potential. Let's try to get some app ideas based on the kind of apps that are somewhere in the middle in terms of popularity (between 1,000,000 and 100,000,000 downloads):
for app in free_android_english:
    if app[1] == 'BOOKS_AND_REFERENCE' and (app[5] == '1,000,000+'
                                            or app[5] == '5,000,000+'
                                            or app[5] == '10,000,000+'
                                            or app[5] == '50,000,000+'):
        print(app[0], ':', app[5])
This niche seems to be dominated by software for processing and reading ebooks, as well as various collections of libraries and dictionaries, so it's probably not a good idea to build similar apps since there'll be some significant competition.
We also notice there are quite a few apps built around the Quran, which suggests that building an app around a popular book can be profitable. It seems that taking a popular book (perhaps a more recent one) and turning it into an app could be profitable for both the Google Play and the App Store markets.
However, it looks like the market is already full of libraries, so we need to add some special features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes on the book, a forum where people can discuss the book, etc.
Conclusions
In this project, we analyzed data about the App Store and Google Play mobile apps with the goal of recommending an app profile that can be profitable for both markets.
We concluded that taking a popular book (perhaps a more recent book) and turning it into an app could be profitable for both the Google Play and the App Store markets. The markets are already full of libraries, so we need to add some special features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes on the book, a forum where people can discuss the book, etc.