Recommender system

A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm), is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user.[1][2][3] Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[1][4]

Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[1] Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, and content recommenders for social media platforms and the open web.[5][6] These systems can operate using a single type of input, like music, or multiple inputs within and across platforms, like news, books and search queries. There are also popular recommender systems for specific topics such as restaurants and online dating. Recommender systems have also been developed to explore research articles and experts,[7] collaborators,[8] and financial services.[9]

A content discovery platform is a software recommendation platform that uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles[10] to television.[11] As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[10]

Overview

Recommender systems usually make use of collaborative filtering, content-based filtering, or both, as well as other approaches such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[12] Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[13]

The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio.

  • Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique.[14]
  • Pandora uses the properties of a song or artist (a subset of the 400 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach.

Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, which is common in collaborative filtering systems.[15][16][17][18][19][20] While Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).

Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.

Recommender systems have been the focus of several granted patents,[21][22][23][24][25] and there are more than 50 software libraries[26] that support the development of recommender systems including LensKit,[27][28] RecBole,[29] ReChorus[30] and RecPack.[31]

History

Elaine Rich created the first recommender system in 1979, called Grundy.[32][33] She looked for a way to recommend books that users might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.

Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University,[34] and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS,[35][36] and by research groups led by Pattie Maes at MIT,[37] Will Hill at Bellcore,[38] and Paul Resnick, also at MIT,[39][4] whose work with GroupLens was awarded the 2010 ACM Software Systems Award.

Montaner provided the first overview of recommender systems from an intelligent agent perspective.[40] Adomavicius provided a new, alternate overview of recommender systems.[41] Herlocker provided an additional overview of evaluation techniques for recommender systems,[42] and Beel et al. discussed the problems of offline evaluations.[43] Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[44][45]

Approaches

Collaborative filtering

An example of collaborative filtering based on a rating system

One approach to the design of recommender systems that has wide use is collaborative filtering.[46] Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items to those they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users or items with a rating history similar to that of the current user or item, these methods generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of a memory-based approach is the user-based algorithm,[47] while that of a model-based approach is matrix factorization.[48]
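
To illustrate the model-based family, the following is a minimal matrix factorization sketch trained with stochastic gradient descent; the toy rating matrix, latent dimensionality, and hyperparameters are illustrative assumptions, not taken from any cited system.

```python
# A minimal sketch of model-based collaborative filtering via matrix
# factorization, trained with stochastic gradient descent.
# The rating matrix and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix (0 = unobserved).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                          # number of latent factors
P = 0.1 * rng.standard_normal((n_users, k))    # user factor matrix
Q = 0.1 * rng.standard_normal((n_items, k))    # item factor matrix

lr, reg = 0.01, 0.02
observed = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]

for epoch in range(200):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]              # prediction error on observed rating
        P[u] += lr * (err * Q[i] - reg * P[u])   # regularized gradient steps
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Predicted rating for user 0 on the unobserved item 2.
print(round(float(P[0] @ Q[2]), 2))
```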

A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content and is therefore capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used to measure user similarity or item similarity in recommender systems, for example the k-nearest neighbor (k-NN) approach[49] and the Pearson correlation, as first implemented by Allen.[50]
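
As an illustration of such a similarity measure, the sketch below computes the Pearson correlation between two users over their co-rated items; the rating dictionaries are illustrative assumptions.

```python
# Illustrative sketch (not a cited implementation): user-user similarity
# via the Pearson correlation, computed over co-rated items only.
import numpy as np

def pearson_sim(ratings_a: dict, ratings_b: dict) -> float:
    """Pearson correlation between two users' ratings on co-rated items."""
    common = ratings_a.keys() & ratings_b.keys()
    if len(common) < 2:
        return 0.0                       # not enough overlap to correlate
    a = np.array([ratings_a[i] for i in common], dtype=float)
    b = np.array([ratings_b[i] for i in common], dtype=float)
    a -= a.mean()                        # mean-center each user's ratings
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

alice = {"item1": 5, "item2": 3, "item3": 4}
bob   = {"item1": 4, "item2": 2, "item3": 5}
print(pearson_sim(alice, bob))  # positive value: overlapping tastes
```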

When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.

Examples of explicit data collection include the following:

  • Asking a user to rate an item on a sliding scale.
  • Asking a user to search.
  • Asking a user to rank a collection of items from favorite to least favorite.
  • Presenting two items to a user and asking them to choose the better one.
  • Asking a user to create a list of items that they like (see Rocchio classification or other similar techniques).

Examples of implicit data collection include the following:

  • Observing the items that a user views in an online store.
  • Analyzing item/user viewing times.[51]
  • Keeping a record of the items that a user purchases online.
  • Obtaining a list of items that a user has listened to or watched on their computer.
  • Analyzing the user's social network and discovering similar likes and dislikes.

Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[52]

  • Cold start: For a new user or item, there is not enough data to make accurate recommendations. Note: one commonly implemented solution to this problem is the multi-armed bandit algorithm.[53][15][16][18][20]
  • Scalability: There are millions of users and products in many of the environments in which these systems make recommendations. Thus, a large amount of computation power is often necessary to calculate recommendations.
  • Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.

One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[54]
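
A minimal sketch of the idea, with an illustrative toy purchase log: items are related by how often the same users interacted with both, normalized as a cosine over the co-purchase sets.

```python
# A minimal item-to-item collaborative filtering sketch ("people who buy
# x also buy y"). The purchase data is an illustrative assumption.
from collections import defaultdict
from itertools import combinations
import math

purchases = {
    "u1": {"x", "y", "z"},
    "u2": {"x", "y"},
    "u3": {"y", "z"},
    "u4": {"x"},
}

# Invert the log: for each item, the set of users who bought it.
item_users = defaultdict(set)
for user, items in purchases.items():
    for item in items:
        item_users[item].add(user)

def cosine(i: str, j: str) -> float:
    """Cosine similarity between two items' buyer sets."""
    inter = len(item_users[i] & item_users[j])
    return inter / math.sqrt(len(item_users[i]) * len(item_users[j]))

for i, j in combinations(sorted(item_users), 2):
    print(i, j, round(cosine(i, j), 2))  # higher score = often co-purchased
```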

Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[1] Collaborative filtering is still used as part of hybrid systems.

Content-based filtering

Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[55][56] These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.

In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.

To create a user profile, the system mostly focuses on two types of information:

  1. A model of the user's preference.
  2. A history of the user's interaction with the recommender system.

Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation).[57] The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vectors, while more sophisticated methods use machine learning techniques such as Bayesian classifiers, cluster analysis, decision trees, and artificial neural networks to estimate the probability that the user will like the item.[58]
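
As a concrete illustration of the simple averaging approach, the sketch below builds tf–idf vectors for a few toy items, averages the vectors of the items a user liked into a profile, and ranks the remaining items by cosine similarity; the item descriptions and the specific scikit-learn usage are illustrative assumptions, not a cited implementation.

```python
# A minimal content-based filtering sketch: items are described by text,
# represented as tf-idf vectors; the user profile is the mean vector of
# liked items; candidates are ranked by cosine similarity to the profile.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "A": "space opera science fiction adventure",
    "B": "romantic comedy set in paris",
    "C": "hard science fiction about a mars colony",
    "D": "french romance drama",
}
liked = ["A"]  # items the user rated highly

ids = list(items.keys())
X = TfidfVectorizer().fit_transform(items.values())

# Content-based user profile: mean of the liked items' tf-idf vectors.
profile = np.asarray(X[[ids.index(i) for i in liked]].mean(axis=0))

scores = cosine_similarity(profile, X).ravel()
for score, item in sorted(zip(scores, ids), reverse=True):
    if item not in liked:
        print(item, round(float(score), 2))  # "C" ranks first: shares "science fiction"
```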

A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value of the recommender system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful, but it would be much more useful if music, videos, products, discussions, etc. from different services could be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of hybrid system.

Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluations of, or sentiments toward, the item. Features extracted from user-generated reviews serve as improved metadata for items: like metadata, they reflect aspects of the item, but they capture the aspects that users actually care about. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.[59]

Hybrid recommendations approaches

Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[41] Several studies have empirically compared the performance of hybrid methods with pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.[60]

Netflix is a good example of the use of hybrid recommender systems.[61] The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).

Some hybridization techniques include:

  • Weighted: Combining the scores of different recommendation components numerically (a minimal sketch follows this list).
  • Switching: Choosing among recommendation components and applying the selected one.
  • Mixed: Recommendations from different recommenders are presented together to give the recommendation.
  • Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones.
  • Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.[62]
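
As an illustration of the weighted technique, the following is a minimal sketch; the component scores and the 0.7/0.3 weights are illustrative assumptions (in practice, weights are tuned on held-out data).

```python
# A minimal weighted-hybrid sketch: scores from a collaborative and a
# content-based component are combined linearly. All values below are
# illustrative assumptions.
collaborative_scores = {"item1": 0.9, "item2": 0.4, "item3": 0.7}
content_scores       = {"item1": 0.2, "item2": 0.8, "item3": 0.6}

def weighted_hybrid(cf, cb, w_cf=0.7, w_cb=0.3):
    """Linear combination of two components' scores per item."""
    return {i: w_cf * cf[i] + w_cb * cb[i] for i in cf.keys() & cb.keys()}

ranked = sorted(weighted_hybrid(collaborative_scores, content_scores).items(),
                key=lambda kv: -kv[1])
for item, score in ranked:
    print(item, round(score, 2))  # item1 wins: strong collaborative signal
```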

Technologies

Session-based recommender systems

These recommender systems use the interactions of a user within a session[63] to generate recommendations. Session-based recommender systems are used at YouTube[64] and Amazon.[65] They are particularly useful when a user's history (such as past clicks and purchases) is not available or not relevant in the current session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music, and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) about the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks,[63][66] Transformers,[67] and other deep-learning-based approaches.[68][69]
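
As a rough illustration of the recurrent-neural-network family (in the spirit of, but not identical to, the cited session-based models), the sketch below embeds the item IDs of the current session, runs them through a GRU, and scores the catalog for the next item; the catalog size, dimensions, and toy session are illustrative assumptions.

```python
# A minimal session-based recommendation sketch: a GRU over the item
# sequence of the current session predicts the next item. Hyperparameters
# and inputs are illustrative assumptions.
import torch
import torch.nn as nn

class SessionGRU(nn.Module):
    def __init__(self, n_items: int, dim: int = 32):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)          # item-ID embeddings
        self.gru = nn.GRU(dim, dim, batch_first=True)  # sequence encoder
        self.out = nn.Linear(dim, n_items)             # scores over the catalog

    def forward(self, session: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(self.emb(session))
        return self.out(h[:, -1])                      # next-item scores from the last step

model = SessionGRU(n_items=1000)
session = torch.tensor([[12, 7, 85]])                  # item IDs clicked so far
scores = model(session)
print(scores.topk(5).indices)                          # top-5 next-item candidates
```

In practice such a model would be trained with a next-item loss over many sessions; the sketch only shows the forward pass.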

Reinforcement learning for recommender systems

The recommendation problem can be seen as a special instance of a reinforcement learning problem in which the user is the environment upon which the agent (the recommendation system) acts in order to receive a reward, for instance a click or engagement by the user.[64][70][71] One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. This is in contrast to traditional learning techniques that rely on less flexible supervised learning approaches; reinforcement learning recommendation techniques make it possible to train models that are optimized directly on metrics of engagement and user interest.[72]

Multi-criteria recommender systems

Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[73] See this chapter[74] for an extended introduction.

Risk-aware recommender systems

The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models the context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.[75]
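
The following is a generic epsilon-greedy bandit sketch of this idea, not DRARS itself: the agent chooses between pushing and withholding a recommendation in each context and learns from reward feedback; the contexts, actions, and simulated rewards are illustrative assumptions.

```python
# A generic epsilon-greedy bandit sketch (not the DRARS system itself):
# per context, the agent decides whether to push a notification and
# learns from reward feedback (engagement vs. annoyance). All data here
# is an illustrative assumption.
import random
from collections import defaultdict

actions = ["push", "withhold"]
counts = defaultdict(int)     # (context, action) -> times tried
values = defaultdict(float)   # (context, action) -> running mean reward
epsilon = 0.1

def choose(context: str) -> str:
    if random.random() < epsilon:          # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: values[(context, a)])  # exploit

def update(context: str, action: str, reward: float) -> None:
    key = (context, action)
    counts[key] += 1
    values[key] += (reward - values[key]) / counts[key]  # incremental mean

# Simulated interactions: pushing during meetings tends to be penalized.
for _ in range(1000):
    ctx = random.choice(["meeting", "evening"])
    act = choose(ctx)
    reward = {("meeting", "push"): -1.0, ("evening", "push"): 1.0}.get((ctx, act), 0.0)
    update(ctx, act, reward)

print(max(actions, key=lambda a: values[("meeting", a)]))  # likely "withhold"
```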

Mobile recommender systems

Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data that recommender systems often have to deal with. It is heterogeneous and noisy, exhibits spatial and temporal autocorrelation, and has validation and generality problems.[76]

There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy.[77] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).

One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city.[76] This system uses GPS data from the routes that taxi drivers take while working, including location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.

Generative recommenders

Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[78] high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling models to scale to trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system's varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, they can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model's performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable "foundation models" for recommendations.

The Netflix Prize

One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was awarded to the BellKor's Pragmatic Chaos team under tie-breaking rules.[79]

The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[80]

Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.

Many benefits accrued to the web due to the Netflix project. Some teams took their technology and applied it to other markets. Some members of the team that finished second place founded Gravity R&D, a recommendation engine that is active in the RecSys community.[79][81] 4-Tell, Inc. created a Netflix project–derived solution for e-commerce websites.

A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb).[82] As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets.[83] This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[84]

Evaluation

Performance measures

Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.[43]

The commonly used metrics are the mean squared error and root mean squared error (RMSE), the latter having been used in the Netflix Prize. Information retrieval metrics such as precision, recall, or DCG are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered important aspects in evaluation.[85] However, many of the classic evaluation measures are highly criticized.[86]
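
For concreteness, the sketch below computes two of these metrics, RMSE over predicted ratings and precision@k over a ranked list, on toy inputs (an illustrative assumption, not a benchmark).

```python
# Illustrative computation of two common evaluation metrics: RMSE on
# predicted ratings and precision@k on a ranked recommendation list.
# Inputs are toy assumptions.
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return len(set(ranked[:k]) & set(relevant)) / k

print(rmse([4.1, 2.9, 5.0], [4, 3, 4]))                            # rating-prediction error
print(precision_at_k(["a", "b", "c", "d"], {"a", "c", "e"}, k=3))  # 2/3
```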

Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.

User studies are rather small-scale: a few dozen or hundreds of users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best.

In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly selects among at least two different recommendation approaches to generate recommendations. Effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.

Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[87]

The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, a recommender system may be assumed to be effective if it is able to recommend as many of the articles contained in a research article's reference list as possible. However, this kind of offline evaluation is viewed critically by many researchers.[88][89][90][43] For instance, it has been shown that the results of offline evaluations correlate poorly with the results of user studies or A/B tests.[90][91] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[92] Often, the results of so-called offline evaluations do not correlate with actual user satisfaction.[93] This is probably because offline training is highly biased toward highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[88][94] Researchers have concluded that the results of offline evaluations should be viewed critically.[95]

Beyond accuracy

Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.

  • Diversity – Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.[96][97]
  • Recommender persistence – In some situations, it is more effective to re-show recommendations,[98] or let users re-rate items,[99] than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully.
  • Privacy – Recommender systems usually have to deal with privacy concerns[100] because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. Much research has been conducted on ongoing privacy issues in this space. The Netflix Prize is particularly notable for the detailed personal information released in its dataset. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset.[101]
  • User demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations.[102] In their paper they show that elderly users tend to be more interested in recommendations than younger users.
  • Robustness – When users can participate in the recommender system, the issue of fraud must be addressed.[103]
  • Serendipity – Serendipity is a measure of "how surprising the recommendations are".[104][97] For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. "[Serendipity] serves two purposes: First, the chance that users lose interest because the choice set is too uniform decreases. Second, these items are needed for algorithms to learn and improve themselves".[105]
  • Trust – A recommender system is of little value for a user if the user does not trust the system.[106] Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item.
  • Labelling – User satisfaction with recommendations may be influenced by the labeling of the recommendations.[107] For instance, in the cited study click-through rate (CTR) for recommendations labeled as "Sponsored" were lower (CTR=5.93%) than CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Recommendations with no label performed best (CTR=9.87%) in that study.

Reproducibility

Recommender systems are notoriously difficult to evaluate offline, and some researchers claim that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility is a recurrent issue in some machine learning publication venues, but it does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[108][109][110]

More recent work on benchmarking a set of the same methods came to qualitatively very different results,[111] whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges, such as WSDM[112] and the RecSys Challenge.[113] Moreover, neural and deep learning methods are widely used in industry, where they are extensively tested.[114][64][65]

The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[115] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions."[116] As a consequence, much research about recommender systems can be considered as not reproducible.[117] Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system.

Said and Bellogín conducted a study of papers published in the field, and also benchmarked some of the most popular frameworks for recommendation, finding large inconsistencies in results, even when the same algorithms and data sets were used.[118] Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[117] "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."

Artificial intelligence applications in recommendation

Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[119] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods.[120] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.

Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing.[121] These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections will introduce specific AI models utilized by a recommendation system by illustrating their theories and functionalities.[citation needed]

KNN-based collaborative filters

Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions.[122] Specifically, it relies on external feedback such as star ratings, purchasing history, and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."

There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors. The ideas are as follows (a minimal sketch in code follows the list):

  1. Data Representation: Create an n-dimensional space where each axis represents a user's traits (ratings, purchases, etc.), and represent each user as a point in that space.
  2. Statistical Distance: 'Distance' measures how far apart users are in this space. See statistical distance for computational details.
  3. Identifying Neighbors: Based on the computed distances, find the k nearest neighbors of the user for whom we want to make recommendations.
  4. Forming Predictive Recommendations: The system analyzes the preferences shared by the k neighbors and makes recommendations based on that similarity.
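
A minimal sketch of these four steps might look as follows; the toy ratings matrix, the Euclidean distance choice, and k = 2 are illustrative assumptions.

```python
# A minimal k-nearest-neighbors collaborative filtering sketch: users are
# points in item-rating space, Euclidean distance finds the k nearest
# peers, and their ratings are averaged to form a prediction.
# The ratings matrix is an illustrative assumption.
import numpy as np

# Rows = users, columns = items (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(user: int, item: int, k: int = 2) -> float:
    # Steps 1-2: represent users as points and compute distances to `user`.
    dists = np.linalg.norm(ratings - ratings[user], axis=1)
    dists[user] = np.inf                    # exclude the user themselves
    # Step 3: pick the k nearest neighbors who actually rated the item.
    neighbors = [u for u in np.argsort(dists) if ratings[u, item] > 0][:k]
    # Step 4: predict from the neighbors' ratings.
    return float(ratings[neighbors, item].mean())

print(predict(user=0, item=2))  # prediction averaged over the 2 nearest raters
```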

Neural networks

An artificial neural network (ANN) is a deep learning model structure that aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[123] Similar to a human brain, these neurons change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. ANNs are usually designed to be black-box models. Unlike regular machine learning, where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs.

ANNs are widely used in recommendation systems for their ability to utilize diverse data. Beyond feedback data, ANNs can incorporate non-feedback data that is too intricate for collaborative filtering to learn from, and their unique structure allows them to identify extra signals from non-feedback data to boost the user experience.[121] The following are some examples:

  • Time and Seasonality: the specific time, date, or season at which a user interacts with the platform
  • User Navigation Patterns: the sequence of pages visited, time spent on different parts of a website, mouse movements, etc.
  • External Social Trends: information from external social media platforms

Two-Tower Model

The Two-Tower model is a neural architecture[124] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[125] It consists of two neural networks:

  • User Tower: Encodes user-specific features, such as interaction history or demographic data.
  • Item Tower: Encodes item-specific features, such as metadata or content embeddings.

The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item.

This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
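
A minimal sketch of the architecture, with feature dimensions, tower shapes, and toy inputs as illustrative assumptions, might look like this:

```python
# A minimal two-tower sketch: separate MLP towers map user and item
# features into a shared embedding space; relevance is their dot product.
# All dimensions and inputs below are illustrative assumptions.
import torch
import torch.nn as nn

def tower(in_dim: int, out_dim: int = 16) -> nn.Sequential:
    """A small MLP mapping raw features to a fixed-length embedding."""
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

user_tower = tower(in_dim=8)    # e.g. encoded interaction history, demographics
item_tower = tower(in_dim=12)   # e.g. metadata / content features

user = torch.randn(1, 8)        # one user's feature vector
items = torch.randn(100, 12)    # catalog of 100 candidate items

u = user_tower(user)            # (1, 16) user embedding
v = item_tower(items)           # (100, 16) item embeddings, pre-computable offline

scores = v @ u.squeeze(0)       # dot-product relevance per item
print(scores.topk(5).indices)   # top-5 retrieved candidates
```

Because the item tower's outputs do not depend on the user, they can be pre-computed and indexed, which is what makes this architecture suitable for fast candidate retrieval.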

Natural language processing

Natural language processing is a series of AI algorithms that make natural human language accessible and analyzable to a machine.[126] It is a fairly modern technique inspired by the growing amount of textual information. A common application in recommendation systems is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.

Specific applications

Academic content discovery

An emerging market for content discovery platforms is academic content.[127][128] Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[10] Though traditional academic search tools such as Google Scholar or PubMed provide readily accessible databases of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.

Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input.[10] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[10]

Decision-making

In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead of polarizing.[129][130] Examples include Polis and Remesh which have been used around the world to help find more consensus around specific political issues.[130] Twitter has also used this approach for managing its community notes,[131] which YouTube planned to pilot in 2024.[132][133] Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[134]

Television

As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[135] With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.

See also

References

  1. ^ a b c d Ricci, Francesco; Rokach, Lior; Shapira, Bracha (2022). "Recommender Systems: Techniques, Applications, and Challenges". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha (eds.). Recommender Systems Handbook (3 ed.). New York: Springer. pp. 1–35. doi:10.1007/978-1-0716-2197-4_1. ISBN 978-1-0716-2196-7.
  2. ^ Lev Grossman (May 27, 2010). "How Computers Know What We Want — Before We Do". TIME. Archived from the original on May 30, 2010. Retrieved June 1, 2015.
  3. ^ Roy, Deepjyoti; Dutta, Mala (2022). "A systematic review and research perspective on recommender systems". Journal of Big Data. 9 (59). doi:10.1186/s40537-022-00592-5.
  4. ^ a b Resnick, Paul, and Hal R. Varian. "Recommender systems." Communications of the ACM 40, no. 3 (1997): 56–58.
  5. ^ Gupta, Pankaj; Goel, Ashish; Lin, Jimmy; Sharma, Aneesh; Wang, Dong; Zadeh, Reza (2013). "WTF: the who to follow service at Twitter". Proceedings of the 22nd International Conference on World Wide Web. Association for Computing Machinery. pp. 505–514. doi:10.1145/2488388.2488433. ISBN 9781450320351.
  6. ^ Baran, Remigiusz; Dziech, Andrzej; Zeja, Andrzej (June 1, 2018). "A capable multimedia content discovery platform based on visual content analysis and intelligent data enrichment". Multimedia Tools and Applications. 77 (11): 14077–14091. doi:10.1007/s11042-017-5014-1. ISSN 1573-7721. S2CID 36511631.
  7. ^ H. Chen, A. G. Ororbia II, C. L. Giles ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries, in arXiv preprint 2015
  8. ^ Chen, Hung-Hsuan; Gou, Liang; Zhang, Xiaolong; Giles, Clyde Lee (2011). "CollabSeer: a search engine for collaboration discovery" (PDF). Proceedings of the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries. Association for Computing Machinery. pp. 231–240. doi:10.1145/1998076.1998121. ISBN 9781450307444.
  9. ^ Felfernig, Alexander; Isak, Klaus; Szabo, Kalman; Zachar, Peter (2007). "The VITA Financial Services Sales Support Environment" (PDF). In William Cheetham (ed.). Proceedings of the 19th National Conference on Innovative Applications of Artificial Intelligence, vol. 2. pp. 1692–1699. ISBN 9781577353232. ACM Copy.
  10. ^ a b c d e "How to tame the flood of literature : Nature News & Comment". Nature. 513 (7516): 129–130. September 3, 2014. doi:10.1038/513129a. PMID 25186906. S2CID 4460749.
  11. ^ Analysis (December 14, 2011). "Netflix Revamps iPad App to Improve Content Discovery". WIRED. Retrieved December 31, 2015.
  12. ^ Melville, Prem; Sindhwani, Vikas (2010). "Recommender Systems" (PDF). In Claude Sammut; Geoffrey I. Webb (eds.). Encyclopedia of Machine Learning. Springer. pp. 829–838. doi:10.1007/978-0-387-30164-8_705. ISBN 978-0-387-30164-8.
  13. ^ R. J. Mooney & L. Roy (1999). Content-based book recommendation using learning for text categorization. In Workshop Recom. Sys.: Algo. and Evaluation.
  14. ^ Haupt, Jon (June 1, 2009). "Last.fm: People-Powered Online Radio". Music Reference Services Quarterly. 12 (1–2): 23–24. doi:10.1080/10588160902816702. ISSN 1058-8167. S2CID 161141937.
  15. ^ a b Chen, Hung-Hsuan; Chen, Pu (January 9, 2019). "Differentiating Regularization Weights -- A Simple Mechanism to Alleviate Cold Start in Recommender Systems". ACM Transactions on Knowledge Discovery from Data. 13: 1–22. doi:10.1145/3285954. S2CID 59337456.
  16. ^ a b Rubens, Neil; Elahi, Mehdi; Sugiyama, Masashi; Kaplan, Dain (2016). "Active Learning in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha (eds.). Recommender Systems Handbook (2 ed.). Springer US. pp. 809–846. doi:10.1007/978-1-4899-7637-6_24. ISBN 978-1-4899-7637-6.
  17. ^ Bobadilla, J.; Ortega, F.; Hernando, A.; Alcalá, J. (2011). "Improving collaborative filtering recommender system results and performance using genetic algorithms". Knowledge-Based Systems. 24 (8): 1310–1316. doi:10.1016/j.knosys.2011.06.005.
  18. ^ a b Elahi, Mehdi; Ricci, Francesco; Rubens, Neil (2016). "A survey of active learning in collaborative filtering recommender systems". Computer Science Review. 20: 29–50. doi:10.1016/j.cosrev.2016.05.002.
  19. ^ Andrew I. Schein; Alexandrin Popescul; Lyle H. Ungar; David M. Pennock (2002). Methods and Metrics for Cold-Start Recommendations. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2002). : ACM. pp. 253–260. ISBN 1-58113-561-0. Retrieved February 2, 2008.
  20. ^ a b Bi, Xuan; Qu, Annie; Wang, Junhui; Shen, Xiaotong (2017). "A group-specific recommender system". Journal of the American Statistical Association. 112 (519): 1344–1353. doi:10.1080/01621459.2016.1219261. S2CID 125187672.
  21. ^ Stack, Charles. "System and method for providing recommendation of goods and services based on recorded purchasing history." U.S. Patent 7,222,085, issued May 22, 2007.
  22. ^ Herz, Frederick SM. "Customized electronic newspapers and advertisements." U.S. Patent 7,483,871, issued January 27, 2009.
  23. ^ Herz, Frederick, Lyle Ungar, Jian Zhang, and David Wachob. "System and method for providing access to data using customer profiles." U.S. Patent 8,056,100, issued November 8, 2011.
  24. ^ Harbick, Andrew V., Ryan J. Snodgrass, and Joel R. Spiegel. "Playlist-based detection of similar digital works and work creators." U.S. Patent 8,468,046, issued June 18, 2013.
  25. ^ Linden, Gregory D., Brent Russell Smith, and Nida K. Zada. "Automated detection and exposure of behavior-based relationships between browsable items." U.S. Patent 9,070,156, issued June 30, 2015.
  26. ^ "Recommender-System Software Libraries & APIs – RS_c". Retrieved November 18, 2024.
  27. ^ Ekstrand, Michael (August 21, 2018). "The LKPY Package for Recommender Systems Experiments". Computer Science Faculty Publications and Presentations. Boise State University, ScholarWorks. doi:10.18122/cs_facpubs/147/boisestate.
  28. ^ Vente, Tobias; Ekstrand, Michael; Beel, Joeran (September 14, 2023). "Introducing LensKit-Auto, an Experimental Automated Recommender System (AutoRecSys) Toolkit". Proceedings of the 17th ACM Conference on Recommender Systems. ACM. pp. 1212–1216. doi:10.1145/3604915.3610656. ISBN 979-8-4007-0241-9.
  29. ^ Zhao, Wayne Xin; Mu, Shanlei; Hou, Yupeng; Lin, Zihan; Chen, Yushuo; Pan, Xingyu; Li, Kaiyuan; Lu, Yujie; Wang, Hui; Tian, Changxin; Min, Yingqian; Feng, Zhichao; Fan, Xinyan; Chen, Xu; Wang, Pengfei (October 26, 2021). "RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms". Proceedings of the 30th ACM International Conference on Information & Knowledge Management. ACM. pp. 4653–4664. arXiv:2011.01731. doi:10.1145/3459637.3482016. ISBN 978-1-4503-8446-9.
  30. ^ Li, Jiayu; Li, Hanyu; He, Zhiyu; Ma, Weizhi; Sun, Peijie; Zhang, Min; Ma, Shaoping (October 8, 2024). "ReChorus2.0: A Modular and Task-Flexible Recommendation Library". 18th ACM Conference on Recommender Systems. ACM. pp. 454–464. doi:10.1145/3640457.3688076. ISBN 979-8-4007-0505-2.
  31. ^ Michiels, Lien; Verachtert, Robin; Goethals, Bart (September 18, 2022). "RecPack: An(other) Experimentation Toolkit for Top-N Recommendation using Implicit Feedback Data". Proceedings of the 16th ACM Conference on Recommender Systems. ACM. pp. 648–651. doi:10.1145/3523227.3551472. ISBN 978-1-4503-9278-5.
  32. ^ BEEL, Joeran, et al. Paper recommender systems: a literature survey. International Journal on Digital Libraries, 2016, 17. Jg., Nr. 4, S. 305–338.
  33. ^ RICH, Elaine. User modeling via stereotypes. Cognitive science, 1979, 3. Jg., Nr. 4, S. 329–354.
  34. ^ Karlgren, Jussi. "An Algebra for Recommendations.Archived 2024-05-25 at the Wayback Machine. Syslab Working Paper 179 (1990). "
  35. ^ Karlgren, Jussi. "Newsgroup Clustering Based On User Behavior-A Recommendation Algebra Archived February 27, 2021, at the Wayback Machine." SICS Research Report (1994).
  36. ^ Karlgren, Jussi (October 2017). "A digital bookshelf: original work on recommender systems". Retrieved October 27, 2017.
  37. ^ Shardanand, Upendra, and Pattie Maes. "Social information filtering: algorithms for automating "word of mouth"." In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 210–217. ACM Press/Addison-Wesley Publishing Co., 1995.
  38. ^ Hill, Will, Larry Stead, Mark Rosenstein, and George Furnas. "Recommending and evaluating choices in a virtual community of use Archived 2018-12-21 at the Wayback Machine." In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 194–201. ACM Press/Addison-Wesley Publishing Co., 1995.
  39. ^ Resnick, Paul, Neophytos Iacovou, Mitesh Suchak, Peter Bergström, and John Riedl. "GroupLens: an open architecture for collaborative filtering of netnews." In Proceedings of the 1994 ACM conference on Computer supported cooperative work, pp. 175–186. ACM, 1994.
  40. ^ Montaner, M.; Lopez, B.; de la Rosa, J. L. (June 2003). "A Taxonomy of Recommender Agents on the Internet". Artificial Intelligence Review. 19 (4): 285–330. doi:10.1023/A:1022850703159. S2CID 16544257..
  41. ^ a b Adomavicius, G.; Tuzhilin, A. (June 2005). "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions". IEEE Transactions on Knowledge and Data Engineering. 17 (6): 734–749. CiteSeerX 10.1.1.107.2790. doi:10.1109/TKDE.2005.99. S2CID 206742345..
  42. ^ Herlocker, J. L.; Konstan, J. A.; Terveen, L. G.; Riedl, J. T. (January 2004). "Evaluating collaborative filtering recommender systems". ACM Trans. Inf. Syst. 22 (1): 5–53. CiteSeerX 10.1.1.78.8384. doi:10.1145/963770.963772. S2CID 207731647..
  43. ^ a b c Beel, J.; Genzmehr, M.; Gipp, B. (October 2013). "A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation" (PDF). Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. pp. 7–14. doi:10.1145/2532508.2532511. ISBN 978-1-4503-2465-6. S2CID 8202591. Archived from the original (PDF) on April 17, 2016. Retrieved October 22, 2013.
  44. ^ Beel, J.; Langer, S.; Genzmehr, M.; Gipp, B.; Breitinger, C. (October 2013). "Research paper recommender system evaluation: A quantitative literature survey" (PDF). Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. pp. 15–22. doi:10.1145/2532508.2532512. ISBN 978-1-4503-2465-6. S2CID 4411601.
  45. ^ Beel, J.; Gipp, B.; Langer, S.; Breitinger, C. (July 26, 2015). "Research Paper Recommender Systems: A Literature Survey". International Journal on Digital Libraries. 17 (4): 305–338. doi:10.1007/s00799-015-0156-0. S2CID 207035184.
  46. ^ John S. Breese; David Heckerman & Carl Kadie (1998). Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence (UAI'98). arXiv:1301.7363.
  47. ^ Breese, John S.; Heckerman, David; Kadie, Carl (1998). Empirical Analysis of Predictive Algorithms for Collaborative Filtering (PDF) (Report). Microsoft Research.
  48. ^ Koren, Yehuda; Volinsky, Chris (August 1, 2009). "Matrix Factorization Techniques for Recommender Systems". Computer. 42 (8): 30–37. CiteSeerX 10.1.1.147.8295. doi:10.1109/MC.2009.263. S2CID 58370896.
  49. ^ Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. (2000). "Application of Dimensionality Reduction in Recommender System A Case Study".,
  50. ^ Allen, R.B. (1990). User Models: Theory, Method, Practice. International J. Man-Machine Studies.
  51. ^ Parsons, J.; Ralph, P.; Gallagher, K. (July 2004). Using viewing time to infer user preference in recommender systems. AAAI Workshop in Semantic Web Personalization, San Jose, California..
  52. ^ Sanghack Lee and Jihoon Yang and Sung-Yong Park, Discovery of Hidden Similarity on Collaborative Filtering to Overcome Sparsity Problem, Discovery Science, 2007.
  53. ^ Felício, Crícia Z.; Paixão, Klérisson V.R.; Barcelos, Celia A.Z.; Preux, Philippe (July 9, 2017). "A Multi-Armed Bandit Model Selection for Cold-Start User Recommendation". Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (PDF). UMAP '17. Bratislava, Slovakia: Association for Computing Machinery. pp. 32–40. doi:10.1145/3079628.3079681. ISBN 978-1-4503-4635-1. S2CID 653908.
  54. ^ Collaborative Recommendations Using Item-to-Item Similarity Mappings Archived 2015-03-16 at the Wayback Machine
  55. ^ Aggarwal, Charu C. (2016). Recommender Systems: The Textbook. Springer. ISBN 978-3-319-29657-9.
  56. ^ Peter Brusilovsky (2007). The Adaptive Web. Springer. p. 325. ISBN 978-3-540-72078-2.
  57. ^ Wang, Donghui; Liang, Yanchun; Xu, Dong; Feng, Xiaoyue; Guan, Renchu (2018). "A content-based recommender system for computer science publications". Knowledge-Based Systems. 157: 1–9. doi:10.1016/j.knosys.2018.05.001.
  58. ^ Blanda, Stephanie (May 25, 2015). "Online Recommender Systems – How Does a Website Know What I Want?". American Mathematical Society. Retrieved October 31, 2016.
  59. ^ X.Y. Feng, H. Zhang, Y.J. Ren, P.H. Shang, Y. Zhu, Y.C. Liang, R.C. Guan, D. Xu, (2019), "The Deep Learning–Based Recommender System "Pubmender" for Choosing a Biomedical Publication Venue: Development and Validation Study", Journal of Medical Internet Research, 21 (5): e12957
  60. ^ Rinke Hoekstra, The Knowledge Reengineering Bottleneck, Semantic Web – Interoperability, Usability, Applicability 1 (2010) 1, IOS Press
  61. ^ Gomez-Uribe, Carlos A.; Hunt, Neil (December 28, 2015). "The Netflix Recommender System". ACM Transactions on Management Information Systems. 6 (4): 1–19. doi:10.1145/2843948.
  62. ^ Robin Burke, Hybrid Web Recommender Systems Archived 2014-09-12 at the Wayback Machine, pp. 377-408, The Adaptive Web, Peter Brusilovsky, Alfred Kobsa, Wolfgang Nejdl (Ed.), Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, Lecture Notes in Computer Science, Vol. 4321, May 2007, 978-3-540-72078-2.
  63. ^ a b Hidasi, Balázs; Karatzoglou, Alexandros; Baltrunas, Linas; Tikk, Domonkos (March 29, 2016). "Session-based Recommendations with Recurrent Neural Networks". arXiv:1511.06939 [cs.LG].
  64. ^ a b c Chen, Minmin; Beutel, Alex; Covington, Paul; Jain, Sagar; Belletti, Francois; Chi, Ed (2018). "Top-K Off-Policy Correction for a REINFORCE Recommender System". arXiv:1812.02353 [cs.LG].
  65. ^ a b Yifei, Ma; Narayanaswamy, Balakrishnan; Haibin, Lin; Hao, Ding (2020). "Temporal-Contextual Recommendation in Real-Time". Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Association for Computing Machinery. pp. 2291–2299. doi:10.1145/3394486.3403278. ISBN 978-1-4503-7998-4. S2CID 221191348.
  66. ^ Hidasi, Balázs; Karatzoglou, Alexandros (October 17, 2018). "Recurrent Neural Networks with Top-k Gains for Session-based Recommendations". Proceedings of the 27th ACM International Conference on Information and Knowledge Management. CIKM '18. Torino, Italy: Association for Computing Machinery. pp. 843–852. arXiv:1706.03847. doi:10.1145/3269206.3271761. ISBN 978-1-4503-6014-2. S2CID 1159769.
  67. ^ Kang, Wang-Cheng; McAuley, Julian (2018). "Self-Attentive Sequential Recommendation". arXiv:1808.09781 [cs.IR].
  68. ^ Li, Jing; Ren, Pengjie; Chen, Zhumin; Ren, Zhaochun; Lian, Tao; Ma, Jun (November 6, 2017). "Neural Attentive Session-based Recommendation". Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. CIKM '17. Singapore, Singapore: Association for Computing Machinery. pp. 1419–1428. arXiv:1711.04725. doi:10.1145/3132847.3132926. ISBN 978-1-4503-4918-5. S2CID 21066930.
  69. ^ Liu, Qiao; Zeng, Yifu; Mokhosi, Refuoe; Zhang, Haibin (July 19, 2018). "STAMP: Short-Term Attention/Memory Priority Model for Session-based Recommendation". Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD '18. London, United Kingdom: Association for Computing Machinery. pp. 1831–1839. doi:10.1145/3219819.3219950. ISBN 978-1-4503-5552-0. S2CID 50775765.
  70. ^ Xin, Xin; Karatzoglou, Alexandros; Arapakis, Ioannis; Jose, Joemon (2020). "Self-Supervised Reinforcement Learning for Recommender Systems". arXiv:2006.05779 [cs.LG].
  71. ^ Ie, Eugene; Jain, Vihan; Narvekar, Sanmit; Agarwal, Ritesh; Wu, Rui; Cheng, Heng-Tze; Chandra, Tushar; Boutilier, Craig (2019). "SlateQ: A Tractable Decomposition for Reinforcement Learning with Recommendation Sets". Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19): 2592–2599.
  72. ^ Zou, Lixin; Xia, Long; Ding, Zhuoye; Song, Jiaxing; Liu, Weidong; Yin, Dawei (2019). "Reinforcement Learning to Optimize Long-term User Engagement in Recommender Systems". Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD '19. pp. 2810–2818. arXiv:1902.05570. doi:10.1145/3292500.3330668. ISBN 978-1-4503-6201-6. S2CID 62903207.
  73. ^ Lakiotaki, K.; Matsatsinis, N.F.; Tsoukias, A. (March 2011). "Multicriteria User Modeling in Recommender Systems". IEEE Intelligent Systems. 26 (2): 64–76. CiteSeerX 10.1.1.476.6726. doi:10.1109/mis.2011.33. S2CID 16752808.
  74. ^ Gediminas Adomavicius; Nikos Manouselis; YoungOk Kwon. "Multi-Criteria Recommender Systems" (PDF). Archived from the original (PDF) on June 30, 2014.
  75. ^ Bouneffouf, Djallel (2013), DRARS, A Dynamic Risk-Aware Recommender System (Ph.D.), Institut National des Télécommunications
  76. ^ a b Yong Ge; Hui Xiong; Alexander Tuzhilin; Keli Xiao; Marco Gruteser; Michael J. Pazzani (2010). An Energy-Efficient Mobile Recommender System (PDF). Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York City, New York: ACM. pp. 899–908. Retrieved November 17, 2011.
  77. ^ Pimenidis, Elias; Polatidis, Nikolaos; Mouratidis, Haralambos (August 3, 2018). "Mobile recommender systems: Identifying the major concepts". Journal of Information Science. 45 (3): 387–397. arXiv:1805.02276. doi:10.1177/0165551518792213. S2CID 19209845.
  78. ^ Zhai, Jiaqi; Liao, Lucy; Liu, Xing; Wang, Yueming; Li, Rui; Cao, Xuan; Gao, Leon; Gong, Zhaojie; Gu, Fangda (May 6, 2024). "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations". doi:10.48550/arXiv.2402.17152. Retrieved December 20, 2024.
  79. ^ a b Lohr, Steve (September 22, 2009). "A $1 Million Research Bargain for Netflix, and Maybe a Model for Others". The New York Times.
  80. ^ R. Bell; Y. Koren; C. Volinsky (2007). "The BellKor solution to the Netflix Prize" (PDF). Archived from the original (PDF) on March 4, 2012. Retrieved April 30, 2009.
  81. ^ Bodoky, Thomas (August 6, 2009). "Mátrixfaktorizáció one million dollars" [Matrix factorization for one million dollars]. Index (in Hungarian).
  82. ^ Rise of the Netflix Hackers Archived January 24, 2012, at the Wayback Machine
  83. ^ "Netflix Spilled Your Brokeback Mountain Secret, Lawsuit Claims". WIRED. December 17, 2009. Retrieved June 1, 2015.
  84. ^ "Netflix Prize Update". Netflix Prize Forum. March 12, 2010. Archived from the original on November 27, 2011. Retrieved December 14, 2011.
  85. ^ Lathia, N.; Hailes, S.; Capra, L.; Amatriain, X. (2010). "Temporal diversity in recommender systems". Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010. New York: ACM. pp. 210–217.
  86. ^ Turpin, Andrew H; Hersh, William (2001). "Why batch and user evaluations do not give the same results". Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. pp. 225–231.
  87. ^ "MovieLens dataset". September 6, 2013.
  88. ^ a b Chen, Hung-Hsuan; Chung, Chu-An; Huang, Hsin-Chien; Tsui, Wen (September 1, 2017). "Common Pitfalls in Training and Evaluating Recommender Systems". ACM SIGKDD Explorations Newsletter. 19: 37–45. doi:10.1145/3137597.3137601. S2CID 10651930.
  89. ^ Jannach, Dietmar; Lerche, Lukas; Gedikli, Fatih; Bonnin, Geoffray (June 10, 2013). "What Recommenders Recommend – an Analysis of Accuracy, Popularity, and Sales Diversity Effects". In Carberry, Sandra; Weibelzahl, Stephan; Micarelli, Alessandro; Semeraro, Giovanni (eds.). User Modeling, Adaptation, and Personalization. Lecture Notes in Computer Science. Vol. 7899. Springer Berlin Heidelberg. pp. 25–37. CiteSeerX 10.1.1.465.96. doi:10.1007/978-3-642-38844-6_3. ISBN 978-3-642-38843-9.
  90. ^ a b Turpin, Andrew H.; Hersh, William (January 1, 2001). "Why batch and user evaluations do not give the same results". Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. SIGIR '01. New York, NY, USA: ACM. pp. 225–231. CiteSeerX 10.1.1.165.5800. doi:10.1145/383952.383992. ISBN 978-1-58113-331-8. S2CID 18903114.
  91. ^ Langer, Stefan (September 14, 2015). "A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems". In Kapidakis, Sarantos; Mazurek, Cezary; Werla, Marcin (eds.). Research and Advanced Technology for Digital Libraries. Lecture Notes in Computer Science. Vol. 9316. Springer International Publishing. pp. 153–168. doi:10.1007/978-3-319-24592-8_12. ISBN 978-3-319-24591-1.
  92. ^ Basaran, Daniel; Ntoutsi, Eirini; Zimek, Arthur (2017). Proceedings of the 2017 SIAM International Conference on Data Mining. pp. 390–398. doi:10.1137/1.9781611974973.44. ISBN 978-1-61197-497-3.
  93. ^ Beel, Joeran; Genzmehr, Marcel; Langer, Stefan; Nürnberger, Andreas; Gipp, Bela (January 1, 2013). "A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation". Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. RepSys '13. New York, NY, USA: ACM. pp. 7–14. CiteSeerX 10.1.1.1031.973. doi:10.1145/2532508.2532511. ISBN 978-1-4503-2465-6. S2CID 8202591.
  94. ^ Cañamares, Rocío; Castells, Pablo (July 2018). Should I Follow the Crowd? A Probabilistic Analysis of the Effectiveness of Popularity in Recommender Systems (PDF). 41st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2018). Ann Arbor, Michigan, USA: ACM. pp. 415–424. doi:10.1145/3209978.3210014. Archived from the original (PDF) on April 14, 2021. Retrieved March 5, 2021.
  95. ^ Cañamares, Rocío; Castells, Pablo; Moffat, Alistair (March 2020). "Offline Evaluation Options for Recommender Systems" (PDF). Information Retrieval. 23 (4). Springer: 387–410. doi:10.1007/s10791-020-09371-3. S2CID 213169978.
  96. ^ Ziegler CN, McNee SM, Konstan JA, Lausen G (2005). "Improving recommendation lists through topic diversification". Proceedings of the 14th international conference on World Wide Web. pp. 22–32.
  97. ^ a b Castells, Pablo; Hurley, Neil J.; Vargas, Saúl (2015). "Novelty and Diversity in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha (eds.). Recommender Systems Handbook (2 ed.). Springer US. pp. 881–918. doi:10.1007/978-1-4899-7637-6_26. ISBN 978-1-4899-7637-6.
  98. ^ Joeran Beel; Stefan Langer; Marcel Genzmehr; Andreas Nürnberger (September 2013). "Persistence in Recommender Systems: Giving the Same Recommendations to the Same Users Multiple Times" (PDF). In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia (eds.). Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013). Lecture Notes of Computer Science (LNCS). Vol. 8092. Springer. pp. 390–394. Retrieved November 1, 2013.
  99. ^ Cosley, D.; Lam, S.K.; Albert, I.; Konstan, J.A.; Riedl, J (2003). "Is seeing believing?: how recommender system interfaces affect users' opinions" (PDF). Proceedings of the SIGCHI conference on Human factors in computing systems. pp. 585–592. S2CID 8307833.
  100. ^ Pu, P.; Chen, L.; Hu, R. (2012). "Evaluating recommender systems from the user's perspective: survey of the state of the art" (PDF). User Modeling and User-Adapted Interaction: 1–39.
  101. ^ Ramakrishnan, Naren; Keller, Benjamin J.; Mirza, Batul J.; Grama, Ananth Y.; Karypis, George (2001). "Privacy risks in recommender systems". IEEE Internet Computing. 5 (6). Piscataway, NJ: IEEE Educational Activities Department: 54–62. CiteSeerX 10.1.1.2.2932. doi:10.1109/4236.968832. S2CID 1977107.
  102. ^ Joeran Beel; Stefan Langer; Andreas Nürnberger; Marcel Genzmehr (September 2013). "The Impact of Demographics (Age and Gender) and Other User Characteristics on Evaluating Recommender Systems" (PDF). In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia (eds.). Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013). Springer. pp. 400–404. Retrieved November 1, 2013.
  103. ^ Konstan JA, Riedl J (2012). "Recommender systems: from algorithms to user experience" (PDF). User Modeling and User-Adapted Interaction. 22 (1–2): 1–23. doi:10.1007/s11257-011-9112-x. S2CID 8996665.
  104. ^ Ricci, F.; Rokach, L.; Shapira, B.; Kantor, P.B. (2011). Recommender Systems Handbook. pp. 1–35. Bibcode:2011rsh..book.....R.
  105. ^ Möller, Judith; Trilling, Damian; Helberger, Natali; van Es, Bram (July 3, 2018). "Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity". Information, Communication & Society. 21 (7): 959–977. doi:10.1080/1369118X.2018.1444076. hdl:11245.1/4242e2e0-3beb-40a0-a6cb-d8947a13efb4. ISSN 1369-118X. S2CID 149344712.
  106. ^ Montaner, Miquel; López, Beatriz; de la Rosa, Josep Lluís (2002). "Developing trust in recommender agents". Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1. pp. 304–305.
  107. ^ Beel, Joeran; Langer, Stefan; Genzmehr, Marcel (September 2013). "Sponsored vs. Organic (Research Paper) Recommendations and the Impact of Labeling" (PDF). In Trond Aalberg; Milena Dobreva; Christos Papatheodorou; Giannis Tsakonas; Charles Farrugia (eds.). Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013). pp. 395–399. Retrieved December 2, 2013.
  108. ^ Ferrari Dacrema, Maurizio; Boglio, Simone; Cremonesi, Paolo; Jannach, Dietmar (January 8, 2021). "A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research". ACM Transactions on Information Systems. 39 (2): 1–49. arXiv:1911.07698. doi:10.1145/3434185. hdl:11311/1164333. S2CID 208138060.
  109. ^ Ferrari Dacrema, Maurizio; Cremonesi, Paolo; Jannach, Dietmar (2019). "Are we really making much progress? A worrying analysis of recent neural recommendation approaches". Proceedings of the 13th ACM Conference on Recommender Systems. RecSys '19. ACM. pp. 101–109. arXiv:1907.06902. doi:10.1145/3298689.3347058. hdl:11311/1108996. ISBN 978-1-4503-6243-6. S2CID 196831663. Retrieved October 16, 2019.
  110. ^ Rendle, Steffen; Krichene, Walid; Zhang, Li; Anderson, John (September 22, 2020). "Neural Collaborative Filtering vs. Matrix Factorization Revisited". Fourteenth ACM Conference on Recommender Systems. pp. 240–248. arXiv:2005.09683. doi:10.1145/3383313.3412488. ISBN 978-1-4503-7583-2.
  111. ^ Sun, Zhu; Yu, Di; Fang, Hui; Yang, Jie; Qu, Xinghua; Zhang, Jie; Geng, Cong (2020). "Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison". Fourteenth ACM Conference on Recommender Systems. ACM. pp. 23–32. doi:10.1145/3383313.3412489. ISBN 978-1-4503-7583-2. S2CID 221785064.
  112. ^ Schifferer, Benedikt; Deotte, Chris; Puget, Jean-François; de Souza Pereira, Gabriel; Titericz, Gilberto; Liu, Jiwei; Ak, Ronay. "Using Deep Learning to Win the Booking.com WSDM WebTour21 Challenge on Sequential Recommendations" (PDF). WSDM '21: ACM Conference on Web Search and Data Mining. ACM. Archived from the original (PDF) on March 25, 2021. Retrieved April 3, 2021.
  113. ^ Volkovs, Maksims; Rai, Himanshu; Cheng, Zhaoyue; Wu, Ga; Lu, Yichao; Sanner, Scott (2018). "Two-stage Model for Automatic Playlist Continuation at Scale". Proceedings of the ACM Recommender Systems Challenge 2018. ACM. pp. 1–6. doi:10.1145/3267471.3267480. ISBN 978-1-4503-6586-4. S2CID 52942462.
  114. ^ Raimond, Yves; Basilico, Justin (2018). Deep Learning for Recommender Systems. Re-Work Deep Learning Summit SF, 2018.
  115. ^ Ekstrand, Michael D.; Ludwig, Michael; Konstan, Joseph A.; Riedl, John T. (January 1, 2011). "Rethinking the recommender research ecosystem". Proceedings of the fifth ACM conference on Recommender systems. RecSys '11. New York, NY, USA: ACM. pp. 133–140. doi:10.1145/2043932.2043958. ISBN 978-1-4503-0683-6. S2CID 2215419.
  116. ^ Konstan, Joseph A.; Adomavicius, Gediminas (January 1, 2013). "Toward identification and adoption of best practices in algorithmic recommender systems research". Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. RepSys '13. New York, NY, USA: ACM. pp. 23–28. doi:10.1145/2532508.2532513. ISBN 978-1-4503-2465-6. S2CID 333956.
  117. ^ a b Breitinger, Corinna; Langer, Stefan; Lommatzsch, Andreas; Gipp, Bela (March 12, 2016). "Towards reproducibility in recommender-systems research". User Modeling and User-Adapted Interaction. 26 (1): 69–101. doi:10.1007/s11257-016-9174-x. ISSN 0924-1868. S2CID 388764.
  118. ^ Said, Alan; Bellogín, Alejandro (October 1, 2014). "Comparative recommender system evaluation". Proceedings of the 8th ACM Conference on Recommender systems. RecSys '14. New York, NY, USA: ACM. pp. 129–136. doi:10.1145/2645710.2645746. hdl:10486/665450. ISBN 978-1-4503-2668-1. S2CID 15665277.
  119. ^ Verma, P.; Sharma, S. (2020). "Artificial Intelligence based Recommendation System". 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). pp. 669–673. doi:10.1109/ICACCCN51052.2020.9362962. ISBN 978-1-7281-8337-4. S2CID 232150789.
  120. ^ Khanal, S.S. (July 2020). "A systematic review: machine learning based recommendation systems for e-learning". Educ Inf Technol. 25 (4): 2635–2664. doi:10.1007/s10639-019-10063-9. S2CID 254475908.
  121. ^ a b Zhang, Q. (February 2021). "Artificial intelligence in recommender systems". Complex and Intelligent Systems. 7: 439–457. doi:10.1007/s40747-020-00212-w.
  122. ^ Wu, L. (May 2023). "A Survey on Accuracy-Oriented Neural Recommendation: From Collaborative Filtering to Information-Rich Recommendation". IEEE Transactions on Knowledge and Data Engineering. 35 (5): 4425–4445. arXiv:2104.13030. doi:10.1109/TKDE.2022.3145690.
  123. ^ Samek, W. (March 2021). "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications". Proceedings of the IEEE. 109 (3): 247–278. arXiv:2003.07631. doi:10.1109/JPROC.2021.3060483.
  124. ^ Yi, X.; Hong, L.; Zhong, E.; Tewari, A.; Dhillon, I. S. (2019). "A scalable two-tower model for estimating user interest in recommendations". Proceedings of the 13th ACM Conference on Recommender Systems.
  125. ^ Google Cloud Blog. "Scaling Deep Retrieval with Two-Tower Models". Published November 30, 2022. Accessed December 2024.
  126. ^ Eisenstein, J. (October 2019). Introduction to Natural Language Processing. MIT Press. ISBN 9780262042840.
  127. ^ Mirkin, Sima (June 4, 2014). ""Extending and Customizing Content Discovery for the Legal Academic Com" by Sima Mirkin". Articles in Law Reviews & Other Academic Journals. Digital Commons @ American University Washington College of Law. Retrieved December 31, 2015.
  128. ^ "Mendeley, Elsevier and the importance of content discovery to academic publishers". Archived from the original on November 17, 2014. Retrieved December 8, 2014.
  129. ^ Thorburn, Luke; Ovadya, Aviv (October 31, 2023). "Social media algorithms can be redesigned to bridge divides — here's how". Nieman Lab. Retrieved July 17, 2024.
  130. ^ a b Ovadya, Aviv (May 17, 2022). "Bridging-Based Ranking". Belfer Center at Harvard University. pp. 1, 14–28. Retrieved July 17, 2024.
  131. ^ Mahadevan, Alex; Smalley, Seth (November 8, 2022). "Elon Musk keeps Birdwatch alive — under a new name". Poynter. Retrieved July 17, 2024.
  132. ^ Shanklin, Will (June 17, 2024). "YouTube's community notes feature rips a page out of X's playbook". Engadget. Retrieved July 17, 2024.
  133. ^ Novak, Matt (June 17, 2024). "YouTube Adding Experimental Community Notes Feature to Battle Misinformation". Gizmodo. Retrieved July 17, 2024.
  134. ^ Ovadya, Aviv (May 17, 2022). "Bridging-Based Ranking". Belfer Center for Science and International Affairs at Harvard University. pp. 21–23. Retrieved July 17, 2024.
  135. ^ The New Face of TV

Further reading

Books
  • Kim Falk (2019). Practical Recommender Systems. Manning Publications. ISBN 9781617292705.
  • Bharat Bhasker; K. Srikumar (2010). Recommender Systems in E-Commerce. McGraw-Hill. ISBN 978-0-07-068067-8. Archived from the original on September 1, 2010.
  • Jannach, Dietmar; Markus Zanker; Alexander Felfernig; Gerhard Friedrich (2010). Recommender Systems: An Introduction. Cambridge University Press. ISBN 978-0-521-49336-9. Archived from the original on August 31, 2015.
  • Seaver, Nick (2022). Computing Taste: Algorithms and the Makers of Music Recommendation. University of Chicago Press.
Scientific articles