[ADMA 2017] Identification of Grey Sheep Users By Histogram Intersection In R... (YONG ZHENG)
The document proposes a new approach to identify "grey sheep users" in recommender systems. Grey sheep users have unusual tastes and low correlations with other users. The approach represents each user as a histogram of their similarities to other users. It then uses outlier detection on the histograms to identify grey sheep users as the outliers with low similarities. The approach is tested on movie rating data and is shown to better identify grey sheep users compared to other methods. Future work involves applying this approach to other datasets and improving recommendations for identified grey sheep users.
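The histogram-plus-outlier-detection idea above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names are hypothetical, and it flags grey sheep by low histogram intersection with the average similarity profile, assuming a precomputed user-user similarity matrix.

```python
import numpy as np

def similarity_histogram(sim_row, bins=10):
    """Represent a user by the histogram of their similarities to other users."""
    hist, _ = np.histogram(sim_row, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()  # normalize so histograms are comparable

def grey_sheep_scores(sim_matrix, bins=10):
    """Score each user by how little their similarity histogram overlaps the crowd's.

    Users whose mass sits in low-similarity bins have small intersection with
    the mean profile, so they surface as outliers (candidate grey sheep)."""
    n = sim_matrix.shape[0]
    hists = np.array([
        similarity_histogram(np.delete(sim_matrix[u], u), bins)  # drop self-similarity
        for u in range(n)
    ])
    mean_hist = hists.mean(axis=0)
    # Histogram intersection with the mean profile: low intersection = outlier
    intersections = np.minimum(hists, mean_hist).sum(axis=1)
    return 1.0 - intersections  # higher score = more grey-sheep-like
```

Users with the highest scores would be the candidate grey sheep handed to downstream treatment.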
[RIIT 2017] Identifying Grey Sheep Users By The Distribution of User Similari... (YONG ZHENG)
Yong Zheng, Mayur Agnani, Mili Singh. “Identifying Grey Sheep Users By The Distribution of User Similarities In Collaborative Filtering”. Proceedings of the 6th ACM Conference on Research in Information Technology (RIIT), Rochester, NY, USA, October 2017.
[IUI 2017] Criteria Chains: A Novel Multi-Criteria Recommendation Approach (YONG ZHENG)
This paper proposes a novel approach called Criteria Chains for multi-criteria recommender systems. Criteria Chains predicts ratings across multiple criteria in a chain, using previous predictions as context. It outperforms baselines by better utilizing relationships between criteria. The best method is to rank criteria by information gain to generate the chain, then use predicted criteria as context (CCC approach) to estimate the overall rating. Future work includes optimizing chain generation beyond information gain.
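The chaining idea can be sketched with plain least squares. This is a simplified stand-in for the paper's method, under stated assumptions: criteria are already ordered (e.g. by information gain), each criterion is fit with a linear model, and each prediction is appended as a context feature for the next link; the function names are hypothetical.

```python
import numpy as np

def fit_criteria_chain(X, criteria, overall):
    """Fit one linear model per criterion, in chain order.

    X: base user/item features, shape (n, d).
    criteria: per-criterion ratings, shape (n, k), columns in chain order.
    overall: overall ratings, shape (n,).
    Each criterion's prediction becomes an extra feature for the next model."""
    models, feats = [], X
    for c in range(criteria.shape[1]):
        A = np.hstack([feats, np.ones((len(feats), 1))])       # bias column
        w, *_ = np.linalg.lstsq(A, criteria[:, c], rcond=None)
        models.append(w)
        feats = np.hstack([feats, (A @ w)[:, None]])           # prediction becomes context
    A = np.hstack([feats, np.ones((len(feats), 1))])
    w_overall, *_ = np.linalg.lstsq(A, overall, rcond=None)
    return models, w_overall

def predict_criteria_chain(X, models, w_overall):
    """Walk the chain, accumulating predicted criteria, then score the overall rating."""
    feats = X
    for w in models:
        A = np.hstack([feats, np.ones((len(feats), 1))])
        feats = np.hstack([feats, (A @ w)[:, None]])
    return np.hstack([feats, np.ones((len(feats), 1))]) @ w_overall
```

Because later links see earlier predictions, dependencies between criteria (the point of the CCC approach) are available to the overall-rating model.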
This paper proposes a method called user-oriented context suggestion that suggests contexts to users based on their preferences. It aims to maximize user experience by recommending not just good items, but appropriate contexts for those items. Two algorithms are developed: one based on contextual rating deviations that identifies how a user's ratings change across contexts, and another that adapts techniques from item-oriented context suggestion. An evaluation on a music dataset finds the tensor factorization approach performs best, with the contextual rating deviations method also outperforming a simple baseline. Future work includes collecting better evaluation data and trying other contextual recommendation algorithms.
[EMPIRE 2016] Adapt to Emotional Reactions In Context-aware Personalization (YONG ZHENG)
This document discusses using emotions as context in recommender systems. It proposes two models that utilize emotional reactions data from movie ratings to improve context-aware recommender system algorithms. The models apply emotional regularization techniques to matrix factorization. One model regularizes based on similar emotional users, while another also considers original user similarities. Tests on a movie rating dataset show improvements over baselines, with emotional state during consumption more effective than after. Future work could explore emotional transitions over time.
Tutorial: Context-awareness In Information Retrieval and Recommender Systems (YONG ZHENG)
The document provides an overview of a tutorial on context-awareness in information retrieval and recommender systems. It discusses topics such as information overload, solutions like information retrieval (e.g. search engines) and recommender systems (e.g. movie recommendations). It then covers context and context-awareness, giving examples like how recommendations may change based on location, time, user intent, etc. It also discusses incorporating context-awareness into information retrieval and recommender systems to improve recommendations.
ACM ICTIR 2019 Slides - Santa Clara, USA (Iadh Ounis)
This document proposes a novel weak supervision approach to unify explicit and implicit feedback for rating prediction and ranking recommendation tasks. It trains an explicit feedback model to annotate implicit feedback with predicted ratings. This allows training a new model on the annotated data, improving ranking accuracy while increasing coverage of long-tail items compared to baselines. Evaluation on multiple datasets shows the approach enhances recommendation for both rating prediction and ranking, with less popularity bias than models using only explicit or implicit feedback.
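The annotation step can be illustrated with a deliberately simple explicit-feedback model. This is a sketch only: the paper's annotator would be a full recommendation model, whereas here a global-mean-plus-biases predictor (hypothetical function names) labels implicit (user, item) interactions with predicted ratings for second-stage training.

```python
import numpy as np

def fit_bias_model(ratings):
    """Fit mu + user bias + item bias on explicit (user, item, rating) triples."""
    mu = np.mean([r for _, _, r in ratings])
    user_res, item_res = {}, {}
    for u, _, r in ratings:
        user_res.setdefault(u, []).append(r - mu)
    bu = {u: float(np.mean(v)) for u, v in user_res.items()}
    for u, i, r in ratings:
        item_res.setdefault(i, []).append(r - mu - bu[u])
    bi = {i: float(np.mean(v)) for i, v in item_res.items()}
    return mu, bu, bi

def annotate_implicit(implicit_pairs, mu, bu, bi):
    """Label each implicit (user, item) interaction with a predicted rating,
    producing extra training triples for the second-stage model."""
    return [(u, i, mu + bu.get(u, 0.0) + bi.get(i, 0.0)) for u, i in implicit_pairs]
```

The union of the original explicit triples and the annotated implicit triples would then train the final model, which is what broadens long-tail coverage.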
• Applied memory-based collaborative filtering techniques such as cosine similarity and Pearson's r, and model-based matrix factorization techniques such as the Alternating Least Squares (ALS) method
• Studied the scalability of these methods on local machines and on Hadoop clusters
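The two memory-based similarity measures named above can be sketched directly; this is a minimal illustration (hypothetical function names) assuming rating vectors where 0 means "unrated", so similarity is computed over co-rated items only.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity over co-rated items (zeros treated as unrated)."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    x, y = a[mask], b[mask]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def pearson_sim(a, b):
    """Pearson's r over co-rated items: cosine of the mean-centered ratings."""
    mask = (a > 0) & (b > 0)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom else 0.0
```

Pearson's mean-centering is what makes it robust to users who rate systematically high or low, which is why it is often preferred over raw cosine for user-user similarity.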
The document discusses recommender systems and describes several techniques used in collaborative filtering recommender systems including k-nearest neighbors (kNN), singular value decomposition (SVD), and similarity weights optimization (SWO). It provides examples of how these techniques work and compares kNN to SWO. The document aims to explain state-of-the-art recommender system methods.
Collaborative filtering is a technique used by recommender systems to predict items users may like based on opinions of similar users. K-nearest neighbors (KNN) is a collaborative filtering algorithm that finds the k most similar users and bases predictions on the ratings of those neighbors. The document describes KNN collaborative filtering, including finding neighbor similarity, making predictions, and evaluating error rates on a movie recommendation system using the MovieLens dataset.
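The kNN prediction step described above can be sketched as a similarity-weighted average over the k most similar users who rated the target item. A minimal sketch with hypothetical names, assuming a ratings matrix where 0 means "unrated" and precomputed similarities to the target user:

```python
import numpy as np

def knn_predict(target, item, ratings, sims, k=3):
    """Predict `target`'s rating for `item` from its k most similar raters.

    ratings: (n_users, n_items) matrix with 0 = unrated.
    sims: similarity of every user to `target` (sims[target] is ignored)."""
    rated = [u for u in range(len(ratings)) if u != target and ratings[u, item] > 0]
    neighbors = sorted(rated, key=lambda u: sims[u], reverse=True)[:k]
    num = sum(sims[u] * ratings[u, item] for u in neighbors)
    den = sum(abs(sims[u]) for u in neighbors)
    return num / den if den else 0.0
```

Evaluating such a predictor on held-out MovieLens ratings (as the document describes) amounts to comparing these predictions against the true ratings with an error metric such as MAE.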
Movie recommendation system using collaborative filtering system (Mauryasuraj98)
The document describes a mini project on building a movie recommendation system. It includes an abstract that discusses different recommendation approaches like demographic, content-based, and collaborative filtering. It also outlines the problem statement, proposed solution, workflow, dataset description, algorithm details, GUI design, result analysis, and applications. The system uses a user-based collaborative filtering model to recommend movies to users based on their preferences and ratings of similar users. Evaluation shows it has good prediction performance.
The goal of a recommender system is to predict the degree to which a user will like or dislike a set of items, such as movies or TV shows.
Most recommender systems use a combination of different approaches, but broadly speaking there are three different methods that can be used: Content analysis, Social recommendations and Collaborative filtering.
This document discusses recommender systems, including:
1. It provides an overview of recommender systems, their history, and common problems like top-N recommendation and rating prediction.
2. It then discusses what makes a good recommender system, including experiment methods like offline, user surveys, and online experiments, as well as evaluation metrics like prediction accuracy, diversity, novelty, and user satisfaction.
3. Key metrics that are important to evaluate recommender systems are discussed, such as user satisfaction, prediction accuracy, coverage, diversity, novelty, serendipity, trust, robustness, and response time. The document emphasizes selecting metrics based on business goals.
The document discusses social recommender systems and how they can improve on traditional collaborative filtering approaches by incorporating trust relationships between users. It outlines research that used trust propagation algorithms to make recommendations for cold start users who lack sufficient rating histories. The author proposes to further explore how different types of social relationships (e.g. trust, friendship) differentially impact recommendation performance and to evaluate social and similarity-based collaborative filtering approaches.
GTC 2021: Counterfactual Learning to Rank in E-commerce (GrubhubTech)
Many e-commerce companies have extensive logs of user behavior such as clicks and conversions. However, if supervised learning is applied naively, systems can suffer from poor performance due to bias and feedback loops. Using techniques from counterfactual learning, we can leverage log data in a principled manner to model user behavior and build personalized recommender systems. At Grubhub, a user journey begins with recommendations, and the vast majority of conversions are powered by recommendations. Our recommender policies can drive user behavior to increase orders and/or profit, so the ability to iterate and experiment rapidly is very important. Because of our GPU workflows, we can iterate 200% more rapidly than with counterpart CPU workflows: developers iterate on ideas in GPU-powered notebooks, hyperparameter spaces are explored up to 8x faster with multi-GPU Ray clusters, and solutions are shipped from notebooks to production in half the time with nbdev. With these accelerated data science workflows and deep learning on GPUs, we were able to deliver a +12.6% conversion boost in just a few months. In this talk we present modern techniques for industrial recommender systems powered by GPU workflows: first a short background on counterfactual learning techniques, followed by practical information and data from our industrial application.
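The core counterfactual-learning primitive behind this kind of off-policy work is the inverse propensity score (IPS) estimator, which reweights logged rewards by how much more (or less) likely the new policy is to take the logged action. A minimal sketch, not Grubhub's implementation; the function name is hypothetical:

```python
import numpy as np

def ips_estimate(rewards, propensities, new_probs):
    """Inverse-propensity-scored estimate of a new policy's expected reward.

    rewards: observed reward for each logged action (e.g. click/conversion).
    propensities: logging policy's probability of the action it took.
    new_probs: candidate policy's probability of that same action."""
    weights = np.asarray(new_probs) / np.asarray(propensities)
    return float(np.mean(np.asarray(rewards) * weights))
```

Because the logged propensities correct for the feedback loop, this estimate is unbiased in expectation, which is what lets logged clicks train a ranker without naively imitating the old policy's exposure bias.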
By Alex Egg, accepted to Nvidia GTC 2021 Conference
Active Learning in Collaborative Filtering Recommender Systems: A Survey (University of Bergen)
In collaborative filtering recommender systems, users’ preferences are expressed as ratings for items, and each additional rating extends the knowledge of the system and affects its recommendation accuracy. In general, the more ratings are elicited from users, the more effective the recommendations are. However, the usefulness of each rating may vary significantly; different ratings may bring a different amount and type of information about the user’s tastes. Hence, specific techniques, defined as “active learning strategies”, can be used to selectively choose the items to present to the user for rating. An active learning strategy identifies and adopts criteria for obtaining data that better reflects users’ preferences and enables the system to generate better recommendations.
Collaborative filtering is a technique used in recommender systems to predict a user's preferences based on other similar users' preferences. It involves collecting ratings data from users, calculating similarities between users or items, and making recommendations. Common approaches include user-user collaborative filtering, item-item collaborative filtering, and probabilistic matrix factorization. Recommender systems are evaluated both offline using metrics like MAE and RMSE, and through online user testing.
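The offline metrics mentioned above, MAE and RMSE, are simple enough to state as code; this is a generic sketch (hypothetical function names), not tied to any particular system in the listing.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between held-out ratings and predictions."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean squared error; penalizes large errors more heavily than MAE."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```

RMSE's squaring is why two systems can tie on MAE yet differ on RMSE: the one with a few large misses loses under RMSE.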
This document summarizes a tutorial on replicable evaluation of recommender systems presented at ACM RecSys 2015. The tutorial covered background on recommender systems and motivation for proper evaluation. It discussed evaluating recommender systems as a "black box" process involving data splitting, recommendation generation, candidate item selection, and metric computation. The presenters emphasized the importance of replicating and reproducing evaluation results to validate findings and advance the field. They provided guidelines for reproducible experimental design and highlighted the need to distinguish between replicability and reproducibility. The tutorial included a demonstration of replicating results and concluded by discussing next steps like agreeing on standard implementations and incentivizing reproducibility.
Machine Learning based Hybrid Recommendation System
• Developed a hybrid movie recommendation system using both collaborative and content-based methods
• Used a linear regression framework to determine optimal feature weights from the collaborative data
• Recommends the movie with the maximum content-based similarity score
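The weight-learning step in the bullets above can be sketched as ordinary least squares over the two component scores. A minimal illustration under stated assumptions (hypothetical names; the real system's feature set is richer than two scalar scores):

```python
import numpy as np

def fit_hybrid_weights(collab_scores, content_scores, true_ratings):
    """Learn a linear blend of collaborative and content-based scores."""
    A = np.column_stack([collab_scores, content_scores, np.ones(len(true_ratings))])
    w, *_ = np.linalg.lstsq(A, true_ratings, rcond=None)
    return w  # (w_collab, w_content, bias)

def hybrid_score(w, collab, content):
    """Blended score for one candidate movie."""
    return w[0] * collab + w[1] * content + w[2]
```

At recommendation time the candidate with the highest blended score is returned, which is the usual way a weighted hybrid resolves the two signals.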
Best Practices in Recommender System Challenges (Alan Said)
Recommender system challenges such as the Netflix Prize, the KDD Cup, etc. have contributed vastly to the development and adoption of recommender systems. Each year a number of challenges or contests are organized, covering different aspects of recommendation. In this tutorial and panel, we present some of the factors involved in successfully organizing a challenge, whether for reasons purely related to research, for industrial challenges, or to widen the scope of recommender system applications.
This document provides an introduction to recommender systems. Recommender systems aim to provide personalized recommendations to help users make decisions by automating strategies for filtering information. The book covers collaborative, content-based, knowledge-based, and hybrid recommendation approaches as well as explanations, evaluations, and applications of recommender systems. It is intended to provide an overview of the field for researchers and professionals.
This presentation discusses recommender systems and collaborative filtering algorithms. It covers two main types of recommender systems: content-based filtering and collaborative filtering. Content-based filtering uses item attributes and user preferences to recommend similar items, while collaborative filtering relies on user ratings and purchases to find similar users and recommend items they liked. The presentation outlines the key steps and algorithms for each approach, including calculating similarity matrices and using k-nearest neighbors. It also discusses challenges for recommender systems like data sparsity and overfitting.
Tutorial: Context In Recommender Systems (YONG ZHENG)
This document provides an overview of a tutorial on context-aware recommender systems. The tutorial will cover traditional recommendation techniques, context-aware recommendation which incorporates additional contextual information such as time and location, and context suggestion. It includes an agenda with topics, background information on recommender systems and evaluation metrics, and descriptions of techniques for context-aware recommendation including context filtering and modeling.
Context-aware Recommendation: A Quick View (YONG ZHENG)
Context-aware recommendation systems take into account additional contextual information beyond just the user and item, such as time, location, and companion. There are three main approaches: contextual prefiltering splits items or users based on context; contextual modeling directly integrates context into models like matrix factorization; and CARSKit is an open source Java library for building context-aware recommender systems.
[Decisions2013@RecSys] The Role of Emotions in Context-aware Recommendation (YONG ZHENG)
The document discusses the role of emotions in context-aware recommender systems (CARS). It explores two classes of CARS algorithms: context-aware splitting approaches and differential context modeling. For context-aware splitting approaches, it examines which emotional contexts are most frequently used to split items or users. For differential context modeling, it analyzes which emotional dimensions are selected or weighted most highly for different algorithm components. The experimental results found that the emotions of end emotion and dominant emotion were the most influential across approaches. User splitting also generally outperformed item splitting.
[SAC 2015] Improve General Contextual SLIM Recommendation Algorithms By Facto... (YONG ZHENG)
This document summarizes a research paper that improves on a previous context-aware recommender system algorithm called GCSLIM by factorizing contexts to address its sparsity problem. The paper introduces GCSLIM and its drawback of measuring context deviations in pairs, which can result in unknown deviations when new context combinations are encountered. To solve this, the paper represents each context as a vector and calculates deviations as the Euclidean distance between vectors. Experimental results on a restaurant dataset show improved precision and MAP over baselines. The conclusions discuss how factorizing contexts can alleviate but not fully solve sparsity, and future work to address cold start issues.
Context-aware recommender systems (CARS) help improve the effectiveness of recommendations by adapting to users' preferences in different contextual situations. One approach to CARS that has been shown to be particularly effective is Context-Aware Matrix Factorization (CAMF). CAMF incorporates contextual dependencies into the standard matrix factorization (MF) process, where users and items are represented as collections of weights over various latent factors. In this paper, we introduce another CARS approach based on an extension of matrix factorization, namely, the Sparse Linear Method (SLIM). We develop a family of deviation-based contextual SLIM (CSLIM) recommendation algorithms by learning rating deviations in different contextual conditions. Our CSLIM approach is better at explaining the underlying reasons behind contextual recommendations, and our experimental evaluations over five context-aware data sets demonstrate that these CSLIM algorithms outperform the state-of-the-art CARS algorithms in the top-N recommendation task. We also discuss the criteria for selecting the appropriate CSLIM algorithm in advance based on the underlying characteristics of the data.
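The deviation-based CSLIM scoring idea can be gestured at in a few lines. This is a heavily simplified sketch, not the paper's learned model: it assumes the item-item aggregation weights and a single scalar rating deviation for the active contextual condition are already given, and the function name is hypothetical.

```python
import numpy as np

def cslim_score(user_ratings, item_weights, context_dev):
    """Deviation-based contextual SLIM score for one candidate item.

    Each observed rating is shifted by the rating deviation learned for the
    active contextual condition before the SLIM-style item aggregation."""
    adjusted = np.where(user_ratings > 0, user_ratings + context_dev, 0.0)
    return float(adjusted @ item_weights)
```

Because the deviation is an explicit additive term per contextual condition, inspecting it directly explains how a given context shifts the recommendation, which is the interpretability advantage the abstract claims over CAMF.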
• Performed memory-based collaborative filtering techniques like Cosine similarities, Pearson’s r & model-based Matrix Factorization techniques like Alternating Least Squares (ALS) method
• Studied the scalability of these methods on local machines & on Hadoop clusters
The document discusses recommender systems and describes several techniques used in collaborative filtering recommender systems including k-nearest neighbors (kNN), singular value decomposition (SVD), and similarity weights optimization (SWO). It provides examples of how these techniques work and compares kNN to SWO. The document aims to explain state-of-the-art recommender system methods.
Collaborative filtering is a technique used by recommender systems to predict items users may like based on opinions of similar users. K-nearest neighbors (KNN) is a collaborative filtering algorithm that finds the k most similar users and bases predictions on the ratings of those neighbors. The document describes KNN collaborative filtering, including finding neighbor similarity, making predictions, and evaluating error rates on a movie recommendation system using the MovieLens dataset.
Movie recommendation system using collaborative filtering system Mauryasuraj98
The document describes a mini project on building a movie recommendation system. It includes an abstract that discusses different recommendation approaches like demographic, content-based, and collaborative filtering. It also outlines the problem statement, proposed solution, workflow, dataset description, algorithm details, GUI design, result analysis, and applications. The system uses a user-based collaborative filtering model to recommend movies to users based on their preferences and ratings of similar users. Evaluation shows it has good prediction performance.
The goal of a recommender system is to predict the degree to which a user will like or dislike a set of items, such as movies or TV shows.
Most recommender systems use a combination of different approaches, but broadly speaking there are three different methods that can be used: Content analysis, Social recommendations and Collaborative filtering.
This document discusses recommender systems, including:
1. It provides an overview of recommender systems, their history, and common problems like top-N recommendation and rating prediction.
2. It then discusses what makes a good recommender system, including experiment methods like offline, user surveys, and online experiments, as well as evaluation metrics like prediction accuracy, diversity, novelty, and user satisfaction.
3. Key metrics that are important to evaluate recommender systems are discussed, such as user satisfaction, prediction accuracy, coverage, diversity, novelty, serendipity, trust, robustness, and response time. The document emphasizes selecting metrics based on business goals.
The document discusses social recommender systems and how they can improve on traditional collaborative filtering approaches by incorporating trust relationships between users. It outlines research that used trust propagation algorithms to make recommendations for cold start users who lack sufficient rating histories. The author proposes to further explore how different types of social relationships (e.g. trust, friendship) differentially impact recommendation performance and to evaluate social and similarity-based collaborative filtering approaches.
GTC 2021: Counterfactual Learning to Rank in E-commerceGrubhubTech
Many ecommerce companies have extensive logs of user behavior such as clicks and conversions. However, if supervised learning is naively applied, then systems can suffer from poor performance due to bias and feedback loops. Using techniques from counterfactual learning we can leverage log data in a principled manner in order to model user behaviour and build personalized recommender systems. At Grubhub, a user journey begins with recommendations and the vast majority of conversions are powered by recommendations. Our recommender policies can drive user behavior to increase orders and/or profit. Accordingly, the ability to rapidly iterate and experiment is very important. Because of our powerful GPU workflows, we can iterate 200% more rapidly than with counterpart CPU workflows. Developers iterate ideas with notebooks powered by GPUs. Hyperparameter spaces are explored up to 8x faster with multi-GPUs Ray clusters. Solutions are shipped from notebooks to production in half the time with nbdev. With our accelerated DS workflows and Deep Learning on GPUs, we were able to deliver a +12.6% conversion boost in just a few months. In this talk we hope to present modern techniques for industrial recommender systems powered by GPU workflows. First a small background on counterfactual learning techniques, then followed by practical information and data from our industrial application.
By Alex Egg, accepted to Nvidia GTC 2021 Conference
Active Learning in Collaborative Filtering Recommender Systems : a SurveyUniversity of Bergen
In collaborative filtering recommender systems user’s preferences are expressed as ratings for items, and each additional rating extends the knowledge of the system and affects the system’s recommendation accuracy. In general, the more ratings are elicited from the users, the more effective the recommendations are. However, the usefulness of each rating may vary significantly, i.e., different ratings may bring a different amount and type of information about the user’s tastes. Hence, specific techniques, which are defined as “active learning strategies”, can be used to selectively choose the items to be presented to the user for rating. In fact, an active learning strategy identifies and adopts criteria for obtaining data that better reflects users’ preferences and enables to generate better recommendations.
Collaborative filtering is a technique used in recommender systems to predict a user's preferences based on other similar users' preferences. It involves collecting ratings data from users, calculating similarities between users or items, and making recommendations. Common approaches include user-user collaborative filtering, item-item collaborative filtering, and probabilistic matrix factorization. Recommender systems are evaluated both offline using metrics like MAE and RMSE, and through online user testing.
This document summarizes a tutorial on replicable evaluation of recommender systems presented at ACM RecSys 2015. The tutorial covered background on recommender systems and motivation for proper evaluation. It discussed evaluating recommender systems as a "black box" process involving data splitting, recommendation generation, candidate item selection, and metric computation. The presenters emphasized the importance of replicating and reproducing evaluation results to validate findings and advance the field. They provided guidelines for reproducible experimental design and highlighted the need to distinguish between replicability and reproducibility. The tutorial included a demonstration of replicating results and concluded by discussing next steps like agreeing on standard implementations and incentivizing reproducibility.
Machine Learning based Hybrid Recommendation System
• Developed a Hybrid Movie Recommendation System using both Collaborative and Content-based methods
• Used linear regression framework for determining optimal feature weights from collaborative data
• Recommends movie with maximum similarity score of content-based data
Best Practices in Recommender System ChallengesAlan Said
Recommender System Challenges such as the Netflix Prize, KDD Cup, etc. have contributed vastly to the development and adoptability of recommender systems. Each year a number of challenges or contests are organized covering different aspects of recommendation. In this tutorial and panel, we present some of the factors involved in successfully organizing a challenge, whether for reasons purely related to research, industrial challenges, or to widen the scope of recommender systems applications.
This document provides an introduction to recommender systems. Recommender systems aim to provide personalized recommendations to help users make decisions by automating strategies for filtering information. The book covers collaborative, content-based, knowledge-based, and hybrid recommendation approaches as well as explanations, evaluations, and applications of recommender systems. It is intended to provide an overview of the field for researchers and professionals.
This presentation discusses recommender systems and collaborative filtering algorithms. It covers two main types of recommender systems: content-based filtering and collaborative filtering. Content-based filtering uses item attributes and user preferences to recommend similar items, while collaborative filtering relies on user ratings and purchases to find similar users and recommend items they liked. The presentation outlines the key steps and algorithms for each approach, including calculating similarity matrices and using k-nearest neighbors. It also discusses challenges for recommender systems like data sparsity and overfitting.
Tutorial: Context In Recommender SystemsYONG ZHENG
This document provides an overview of a tutorial on context-aware recommender systems. The tutorial will cover traditional recommendation techniques, context-aware recommendation which incorporates additional contextual information such as time and location, and context suggestion. It includes an agenda with topics, background information on recommender systems and evaluation metrics, and descriptions of techniques for context-aware recommendation including context filtering and modeling.
Context-aware Recommendation: A Quick ViewYONG ZHENG
Context-aware recommendation systems take into account additional contextual information beyond just the user and item, such as time, location, and companion. There are three main approaches: contextual prefiltering splits items or users based on context; contextual modeling directly integrates context into models like matrix factorization; and CARSKit is an open source Java library for building context-aware recommender systems.
[Decisions2013@RecSys]The Role of Emotions in Context-aware RecommendationYONG ZHENG
The document discusses the role of emotions in context-aware recommender systems (CARS). It explores two classes of CARS algorithms: context-aware splitting approaches and differential context modeling. For context-aware splitting approaches, it examines which emotional contexts are most frequently used to split items or users. For differential context modeling, it analyzes which emotional dimensions are selected or weighted most highly for different algorithm components. The experimental results found that the emotions of end emotion and dominant emotion were the most influential across approaches. User splitting also generally outperformed item splitting.
[SAC 2015] Improve General Contextual SLIM Recommendation Algorithms By Facto... - Yong Zheng
This document summarizes a research paper that improves on a previous context-aware recommender system algorithm called GCSLIM by factorizing contexts to address its sparsity problem. The paper introduces GCSLIM and its drawback of measuring context deviations in pairs, which can result in unknown deviations when new context combinations are encountered. To solve this, the paper represents each context as a vector and calculates deviations as the Euclidean distance between vectors. Experimental results on a restaurant dataset show improved precision and MAP over baselines. The conclusions discuss how factorizing contexts can alleviate but not fully solve sparsity, and future work to address cold start issues.
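The vector-based deviation idea can be sketched as follows. The latent vectors here are invented stand-ins for the learned context representations described in the paper:

```python
import math

# Each contextual condition is represented by a latent vector
# (in the paper these are learned; the values below are made up).
context_vec = {
    "weekday": [0.2, 0.9],
    "weekend": [0.8, 0.1],
    "holiday": [0.9, 0.2],
}

def deviation(c1, c2):
    """Contextual rating deviation as the Euclidean distance
    between the two conditions' vectors."""
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(context_vec[c1], context_vec[c2])))

# Unseen *pairs* are no longer a problem: any two conditions that
# have vectors yield a deviation, even if never co-observed.
print(round(deviation("weekend", "holiday"), 3))
```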
Context-aware recommender systems (CARS) help improve the effectiveness of recommendations by adapting to users' preferences in different contextual situations. One approach to CARS that has been shown to be particularly effective is Context-Aware Matrix Factorization (CAMF). CAMF incorporates contextual dependencies into the standard matrix factorization (MF) process, where users and items are represented as collections of weights over various latent factors. In this paper, we introduce another CARS approach based on an extension of matrix factorization, namely, the Sparse Linear Method (SLIM). We develop a family of deviation-based contextual SLIM (CSLIM) recommendation algorithms by learning rating deviations in different contextual conditions. Our CSLIM approach is better at explaining the underlying reasons behind contextual recommendations, and our experimental evaluations over five context-aware data sets demonstrate that these CSLIM algorithms outperform the state-of-the-art CARS algorithms in the top-$N$ recommendation task. We also discuss the criteria for selecting the appropriate CSLIM algorithm in advance based on the underlying characteristics of the data.
[UMAP 2015] Integrating Context Similarity with Sparse Linear Recommendation ... - Yong Zheng
This document summarizes a research paper on integrating context similarity with sparse linear recommendation models. It discusses contextual modeling approaches, including independent contextual modeling using tensor factorization and dependent contextual modeling using deviation-based and similarity-based approaches. It presents the sparse linear method (SLIM) and a contextual extension (CSLIM) that incorporates context similarity. Four methods for modeling context similarity - independent, latent, weighted Jaccard, and multidimensional - are described. Experimental evaluations on limited context-aware datasets are conducted to compare baseline algorithms like tensor factorization to the new similarity-based CSLIM approaches.
[RecSys 2014] Deviation-Based and Similarity-Based Contextual SLIM Recommenda... - Yong Zheng
Yong Zheng. "Deviation-Based and Similarity-Based Contextual SLIM Recommendation Algorithms". ACM RecSys Doctoral Symposium, Proceedings of the 8th ACM Conference on Recommender Systems (ACM RecSys 2014), pp. 437-440, Silicon Valley, CA, USA, Oct 2014 [Doctoral Symposium, Acceptance rate: 47%]
This paper proposes a similarity-based approach for contextual modeling in context-aware recommender systems. It introduces three methods for representing context similarity - independent, latent, and multidimensional - and applies them to context-aware matrix factorization and sparse linear models. Experimental results on four datasets show the multidimensional context similarity approach outperforms deviation-based contextual modeling and independent context modeling. The paper concludes similarity-based contextual modeling provides a general way to incorporate contexts and recommends exploring solutions to reduce costs in multidimensional modeling and applying other base recommender algorithms.
[SAC2014] Splitting Approaches for Context-Aware Recommendation: An Empirical ... - Yong Zheng
This document describes an empirical study that compares different context-aware recommendation approaches. It evaluates three context-aware splitting approaches (item splitting, user splitting, and UI splitting) on several datasets using different recommendation algorithms and impurity criteria for splitting. The results show that UI splitting generally performs the best when used with matrix factorization as the recommendation algorithm. The document also compares the splitting approaches to other context-aware recommendation methods like differential context modeling and context-aware matrix factorization. The goal is to better understand how different context-aware techniques compare and which may be most appropriate depending on the data and application.
[IUI2015] A Revisit to The Identification of Contexts in Recommender Systems - Yong Zheng
This document proposes a framework for identifying contexts in context-aware recommender systems (CARS). It defines contexts as any information that characterizes a user's situation. The framework models activities as having subjects (users), objects (items or other users), and actions (interactions). It provides three rules for context identification: 1) attributes of actions are contexts, 2) some dynamic attributes in user profiles can be contexts, and 3) some attributes of user objects can be contexts in social networks. The framework aims to clarify what should be considered contexts versus item content to improve CARS development and analysis.
Matrix Factorization In Recommender Systems - Yong Zheng
The document discusses matrix factorization techniques for recommender systems. It begins with an overview of recommender systems and their use of matrix factorization for dimensionality reduction. Principal component analysis and singular value decomposition are described as early linear algebra techniques used for this purpose. The document then focuses on how these techniques evolved into basic and extended matrix factorization methods in recommender systems, using the Netflix Prize competition as an example.
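A minimal sketch of basic matrix factorization, learned by stochastic gradient descent as popularized during the Netflix Prize era: each rating r_ui is approximated by the dot product of a user factor vector and an item factor vector. Ratings, rank, and hyperparameters below are illustrative:

```python
import random

random.seed(0)

# Observed ratings as (user, item, rating); a 2x2 toy matrix.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]
n_users, n_items, k = 2, 2, 2  # k = number of latent factors
P = [[random.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_items)]

lr, reg = 0.05, 0.02  # learning rate and L2 regularization
for epoch in range(500):
    for u, i, r in ratings:
        err = r - sum(P[u][f] * Q[i][f] for f in range(k))
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]  # update from pre-step values
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)

pred = sum(P[0][f] * Q[0][f] for f in range(k))
print(round(pred, 1))  # should land near the observed rating of 5
```

The regularization term is what keeps the factors from overfitting the handful of observed cells, which matters far more on real, sparse data than on this dense toy example.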
This thesis proposes designing and developing a personalized country recommender system. It begins by introducing the problem motivation and research questions. The document then reviews the state of the art on recommender systems, including definitions, data sources, approaches (collaborative filtering, content-based filtering, hybrid filtering), and evaluation metrics. It describes the methodology, which includes collecting a training dataset, implementing recommender algorithms (SVD, KNN, co-clustering), and system design. The results and evaluation of the system are then presented.
Classification and Detection of Micro-Level Impact - CSCW2017 (Link: http://dl....) - R R
Rezapour R, Diesner J (2017) Classification and Detection of Micro-Level Impact of Issue-Focused Films based on Reviews. Proceedings of 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017), Portland, OR.
Recommender Systems supporting Decision Making through Analysis of User Emoti... - Marco Polignano
1) The speaker proposes a framework for incorporating emotions and personality traits into recommender systems to better support decision making.
2) Key aspects of the framework include modeling a user's affective profile containing personality traits, historical decision cases and emotions, and using this to generate personalized recommendations.
3) The recommender system would identify a user's emotions during decision making using implicit and explicit strategies, and store this along with past decisions as cases in the affective profile knowledge base.
Systemic Design Toolkit - Systems Innovation Barcelona - Peter Jones
The Systemic Design Toolkit is a formalized set of methods and research tools designed by Namahn and developed in collaboration with me (SDA) and Alex Ryan of MaRS. The Toolkit is available at https://www.systemicdesigntoolkit.org/
ENTERTAINMENT CONTENT RECOMMENDATION SYSTEM USING MACHINE LEARNING - IRJET Journal
This document describes a content-based movie recommendation system using machine learning techniques. It discusses how content-based filtering utilizes metadata like plot, cast, and genre to recommend similar movies. Term frequency-inverse document frequency and cosine similarity are used to measure similarity between movies. Sentiment analysis with naive Bayes classification determines if reviews are positive or negative. The system was tested on IMDb data and achieved 98.77% accuracy for sentiment analysis. Users can search movies and receive recommendations, view movie details, and rate results to improve recommendations. Future work includes incorporating location data and ratings from other sites into a hybrid recommendation model.
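The TF-IDF plus cosine-similarity step can be sketched as follows. This is a toy corpus of invented titles and keywords, not the IMDb data used in the paper:

```python
import math
from collections import Counter

# Each movie is described by a bag of plot keywords.
movies = {
    "Movie A": "space hero rebellion empire",
    "Movie B": "space empire war fleet",
    "Movie C": "romance paris cafe",
}

docs = {title: Counter(text.split()) for title, text in movies.items()}
n = len(docs)
vocab = {w for d in docs.values() for w in d}
# Inverse document frequency: rarer terms carry more weight.
idf = {w: math.log(n / sum(1 for d in docs.values() if w in d)) for w in vocab}

def tfidf(title):
    d = docs[title]
    total = sum(d.values())
    return {w: (c / total) * idf[w] for w, c in d.items()}

def cos(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Rank the other movies by similarity to "Movie A".
sims = {t: cos(tfidf("Movie A"), tfidf(t)) for t in movies if t != "Movie A"}
print(max(sims, key=sims.get))  # the space movie, not the romance
```

In practice a library vectorizer would replace the hand-rolled TF-IDF, but the ranking logic is the same.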
A recommender system (RS) is an information filtering system that suggests items relevant to the end user's requirements. Applications of RSs include recommending movies, music, TV serials, books, documents, websites, tourist places, etc.
Benefits of RSs: they help both service providers and users, reducing the transaction costs of finding and selecting items and supporting decision making. The proposed work, DEMOGRAPHY BASED HYBRID SYSTEM FOR MOVIE RECOMMENDATIONS, combines collaborative, content-based, and demographic filtering to recommend movies to the end user. The model uses the SVD++ technique available in the Surprise Python library and achieves an MSE of 0.92, lower than the other techniques compared.
Yusuke Goto (Iwate Pref. Univ.) and Shingo Takahashi (Waseda Univ.). "How Scenario Analysis Can Contribute to ABMS Validation". The 7th International Workshop on Agent-based Approaches in Economic and Social Complex Systems, Osaka, Japan, January 17, 2012.
Yifan Guo is a PhD student at Case Western Reserve University studying machine learning and big data. He received his B.S. from Beijing University of Posts and Telecommunications and his Master's from Northwestern University. His research projects include developing an image recognition system for identifying pill types, building a movie recommendation system using matrix factorization, and designing an algorithm for a nonlinear integer programming transportation problem.
The Human Factor in Digital Recommender Systems - SIMAdmin
The document discusses a study on how users perceive and interact with Netflix's recommender systems. Eight interviews were conducted in which participants discussed their criteria for evaluating recommendations, experiences with recommender systems, and opinions on Netflix's recommendations. A coding schema was developed to analyze the interviews, covering factors like viewing preferences, the role of serendipity, and the importance of trust in recommendations. Key findings included that common genres and talents were important finding aids for participants, and that their level of trust in a recommendation source impacted its perceived value.
How to use LLMs for creating a content-based recommendation system for entert... - mahaffeycheryld
To utilize Large Language Models (LLMs) for content-based recommendation systems in entertainment platforms, follow these steps:
Data Collection: Gather diverse datasets of entertainment content with metadata.
Preprocessing: Clean, tokenize, and encode textual data for model input.
Model Selection: Choose an LLM architecture like GPT-3 and fine-tune it on the dataset.
Feature Extraction: Extract relevant features from the data, such as genre, keywords, and sentiment.
Recommendation Generation: Utilize the fine-tuned LLM to generate personalized recommendations based on user preferences and content features.
Evaluation and Optimization: Assess recommendation quality and iterate for continual improvement.
https://www.leewayhertz.com/build-content-based-recommendation-for-entertainment-using-llms/
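The recommendation-generation step above can be sketched with precomputed embeddings and cosine similarity. The vectors below are invented stand-ins for what an LLM embedding model would return for each item and for the user's profile:

```python
import math

# Stand-in item embeddings; in practice these come from an LLM
# embedding endpoint applied to plot/genre/keyword text.
item_emb = {
    "Show A": [0.9, 0.1, 0.0],
    "Show B": [0.1, 0.9, 0.2],
    "Show C": [0.0, 0.1, 0.9],
}
# A user profile, e.g. the mean embedding of items the user liked.
user_profile = [0.85, 0.15, 0.05]

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Recommend items whose embeddings lie closest to the profile.
recs = sorted(item_emb, key=lambda t: cos(user_profile, item_emb[t]),
              reverse=True)
print(recs[0])
```

The fine-tuning and feature-extraction steps change what the vectors encode; the ranking step itself stays this simple.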
The study reviewed literature on ICT for governance and policy modelling to identify gaps in the research area. It found 4 relevant background references and listed 64 related projects with about 80M euros in total grant funding. However, it did not include any primary research or analysis. The study concluded that more work is needed to address gaps in using ICT for governance and policy modelling, but it did not specify what particular gaps need to be addressed.
The document discusses a content-based recommendation system with sentiment analysis. It provides an overview of recommendation systems and their importance. The objectives are to provide personalized recommendations to users based on their preferences using information filtering techniques. Existing systems faced issues like scalability, sparsity, and cold starts. The proposed system is a hybrid approach that combines item-based collaborative filtering with user clustering to make predictions. It will be scalable while addressing cold starts. Tools like Flask, JavaScript, Python are used. Cosine similarity and sentiment analysis techniques are also discussed. The conclusion is that the proposed system can recommend less popular items and future work could include other factors in recommendations.
Develop a robust and effective book recommendation system that provides personalized suggestions to users, enhancing their reading experience and promoting diverse literary exploration.
Anticipation 2017 Assembling Requisite Stakeholder Variety - Peter Jones
This document discusses ensuring variety in stakeholder representation in foresight practices to reduce cognitive biases. It notes that foresight methods often mix to reduce reliance on one, but variety is also needed in stakeholder perspectives represented. Without accounting for cognitive and temporal biases in who is selected, four points of failure can occur: biased framing, biased content selection, horizon bias in stakeholders, and insufficient variety. The document advocates for evolutionary sampling to map categories related to the issue and minimize influence of biases, expanding variety both within the issue and beyond the future system. It also discusses accounting for individuals' temporal preferences to avoid horizon biases within groups.
Human-centered AI: how can we support lay users to understand AI? - Katrien Verbert
The document summarizes research on human-centered AI and how to support lay users in understanding AI. It discusses various research projects that aim to explain model outcomes to increase user trust and acceptance. It explores how personal characteristics like need for cognition can impact the effectiveness of explanations. The research also looks at different application domains for AI like healthcare, education, agriculture and recommendations. It emphasizes the importance of user involvement, personalization and domain expertise in developing AI systems that non-experts can understand and trust.
This document outlines a research study that aims to identify the creative ideation process used by advertising students. It will use Interactive Qualitative Analysis (IQA), a modified grounded theory approach, to analyze focus groups and interviews with students. The study seeks to map out the key elements (affinities) of the creative process and how they relate, in order to develop a system that describes the advertising-specific creative ideation process used by students. This will help address gaps in existing linear models of creativity and provide insights to improve creative training.
PRESENTATION ON DECISION MAKING MODULES GROUP WORK SUE - Suelette Leonard
This document provides an overview of decision making models, techniques, and factors. It outlines the presentation topics which include background on decision making models, critical discussion of models with examples, quantitative tools and techniques, and considering environmental factors. Quantitative techniques discussed include regression analysis, probability theory, linear programming, integer programming, network analysis, queuing theory, simulation, and learning curves. Environmental factors affecting decision making include personal demographics, culture, social class, intimate groups, secondary groups, information, and psychological factors.
IRJET- Sentimental Analysis on Audio and Video using Vader Algorithm - Monali ... - IRJET Journal
This document presents a proposed system for performing sentiment analysis on audio and video reviews from social media platforms. The system first collects audio and video data from sites like YouTube and Facebook. It then separates the audio and video files, converts them to .wav format, and extracts text from the audio and video files. This extracted text is then analyzed using the VADER sentiment analysis algorithm to determine the sentiment polarity (positive, negative, neutral) expressed in the text. VADER is a lexicon-based approach that rates words based on sentiment and calculates overall sentiment scores. The proposed system aims to analyze sentiment in audio and video reviews to better understand user opinions expressed across various social media platforms.
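The lexicon-based scoring idea behind VADER can be sketched as follows. This is not the real VADER lexicon or its full heuristics; it is a toy illustration of valence-weighted words with simple negation handling:

```python
# Invented mini-lexicon: the real VADER lexicon has thousands of
# human-rated entries plus rules for punctuation, caps, and emoji.
LEXICON = {"good": 1.9, "great": 3.1, "bad": -2.5, "terrible": -3.4}
NEGATIONS = {"not", "never", "no"}

def score(text):
    """Sum valence weights; a negation word flips the next word's sign."""
    total, negate = 0.0, False
    for word in text.lower().split():
        if word in NEGATIONS:
            negate = True
            continue
        if word in LEXICON:
            total += -LEXICON[word] if negate else LEXICON[word]
        negate = False
    return total

print(score("not bad , actually a great movie") > 0)  # net-positive review
```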
Unit IV Knowledge and Hybrid Recommendation System.pdf - ArthyR3
This document details knowledge-based recommendation systems and hybrid recommendation systems, which combine the capabilities of multiple recommendation techniques to provide personalized recommendations to users.
Similar to [WI 2017] Affective Prediction By Collaborative Chains In Movie Recommendation (20)
[WI 2014] Context Recommendation Using Multi-label Classification - Yong Zheng
This document proposes a new type of recommender system called a context recommender that recommends appropriate contexts (e.g. time, location, companion) for users to consume items. It discusses how context recommenders differ from traditional and context-aware recommenders. It also presents the framework for context recommenders, including algorithms using multi-label classification to directly predict contexts. The document reports on experiments comparing these algorithms on several datasets and finds that personalized algorithms outperform non-personalized ones and that certain multi-label classification algorithms, like label powerset using support vector machines, achieve the best performance.
[UMAP2013] Recommendation with Differential Context Weighting - Yong Zheng
Context-aware recommender systems (CARS) adapt their recommendations to users’ specific situations. In many recommender systems, particularly those based on collaborative filtering, the contextual constraints may lead to sparsity: fewer matches between the current user context and previous situations. Our earlier work proposed an approach called differential context relaxation (DCR), in which different subsets of contextual features were applied in different components of a recommendation algorithm. In this paper, we expand on our previous work on DCR, proposing a more general approach — differential context weighting (DCW), in which contextual features are weighted. We compare DCR and DCW on two real-world datasets, and DCW demonstrates improved accuracy over DCR with comparable coverage. We also show that particle swarm optimization (PSO) can be used to efficiently determine the weights for DCW.
[SOCRS2013] Differential Context Modeling in Collaborative Filtering - Yong Zheng
This document discusses differential context modeling (DCM) in collaborative filtering recommender systems. DCM is a framework that separates recommender algorithms into components and applies differential context constraints to each component to maximize contextual effects. The document applies DCM using differential context relaxation and weighting to item-based collaborative filtering and Slope One recommender algorithms. Experimental results on movie and food rating datasets show that differential context weighting improves predictive performance over baselines and differential context relaxation. Future work involves expanding DCM to additional recommender algorithms and optimizing performance.
This document provides an overview of slope one recommender algorithms and their implementation in distributed systems using Hadoop and Mahout. It discusses slope one and weighted slope one recommenders, how they are implemented in Mahout, and how Mahout runs them in a distributed manner on Hadoop using mappers and reducers. It then describes experiments run on MovieLens data using this distributed slope one implementation and analyzes the results.
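A single-machine sketch of the weighted Slope One logic that the document describes distributing with mappers and reducers: average the rating differences between item pairs, then predict using the co-rating counts as weights. Ratings below are invented:

```python
from collections import defaultdict

# Toy ratings; in the document's setting this would be MovieLens data.
ratings = {
    "alice": {"a": 5, "b": 3},
    "bob":   {"a": 4, "b": 2, "c": 5},
    "carol": {"b": 2},
}

# diffs[i][j] = [sum of (r_i - r_j), number of users rating both]
diffs = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
for prefs in ratings.values():
    for i in prefs:
        for j in prefs:
            if i != j:
                diffs[i][j][0] += prefs[i] - prefs[j]
                diffs[i][j][1] += 1

def predict(user, item):
    """Weighted Slope One: deviations weighted by co-rating counts."""
    num = den = 0.0
    for j, r in ratings[user].items():
        s, cnt = diffs[item][j]
        if cnt:
            num += (s / cnt + r) * cnt
            den += cnt
    return num / den if den else None

print(predict("carol", "a"))
```

The pairwise difference table is exactly what the Hadoop mappers and reducers would compute in parallel before the cheap prediction step.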
The document provides an outline for a manual on writing a Ph.D. dissertation. It discusses introducing the dissertation, how to write and organize it, dissertation style, and good habits for writing a dissertation. Key sections include outlining the dissertation process and milestones, differences between papers/theses, common dissertation skeleton structures, principles for organizing sections, and tips for writing early and getting feedback.
A topic trend can be inferred from the usage of tags -- we name this attention. Time-series analysis of tagging can indicate the evolution of attention flow. This slide takes political analysis as an example, applying time-series techniques to discover interesting patterns.
[CARS2012@RecSys] Optimal Feature Selection for Context-Aware Recommendation u... - Yong Zheng
This document summarizes a research paper on optimal feature selection for context-aware recommendation systems using differential relaxation. The paper proposes a differential context relaxation (DCR) model that applies different context relaxations to different components of a recommendation algorithm to maximize their contributions. It uses binary particle swarm optimization to efficiently find optimal context relaxations and outperforms exhaustive search. Experimental results on a food preference dataset show the effects of different contexts and context-linked features. The paper discusses limitations and opportunities for future work to address sparsity issues.
[ECWEB2012] Differential Context Relaxation for Context-Aware Travel Recommend... - Yong Zheng
Context-aware recommendation (CARS) has been shown to be an effective approach to recommendation in a number of domains. However, the problem of identifying appropriate contextual variables remains: using too many contextual variables risks a drastic increase in dimensionality and a loss of accuracy in recommendation. In this paper, we propose a novel treatment of context – identifying influential contexts for different algorithm components instead of for the whole algorithm. Based on this idea, we take traditional user-based collaborative filtering (CF) as an example, decompose it into three context-sensitive components, and propose a hybrid contextual approach. We then identify appropriate relaxations of contextual constraints for each algorithm component. The effectiveness of context relaxation is demonstrated by comparison of three algorithms using a travel data set: a context-ignorant approach, contextual pre-filtering, and our hybrid contextual algorithm. The experiments show that choosing an appropriate relaxation of the contextual constraints for each component of an algorithm outperforms strict application of the context.
[HetRec2011@RecSys] Experience Discovery: Hybrid Recommendation of Student Act... - Yong Zheng
The aim of the Experience Discovery project is to recommend extracurricular activities to high school and middle school students in urban areas. In implementing this system, we have been able to make use of both usage data and data drawn from a social networking site. Using pilot data, we are able to show that very simple aggregation techniques applied to the social network can improve recommendation accuracy.
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Generating privacy-protected synthetic data using Secludy and Milvus - Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Essentials of Automations: Exploring Attributes & Automation Parameters - Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove redundant or unused accounts to save money. There are also some practices that can lead to unnecessary spending, e.g. using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and redundant accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to apply immediately
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
Inconsistent user experiences, siloed data, high costs, and changing customer expectations: Citizens Bank was facing these challenges while attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe, and Citizens was using legacy utilities to feed the critical mainframe data to customer-facing channels such as call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
How to Interpret Trends in the Kalyan Rajdhani Mix Chart (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Main news related to the CCS TSI 2023 (2023/1695), Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by the CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
[WI 2017] Affective Prediction By Collaborative Chains In Movie Recommendation
1. Affective Prediction By Collaborative
Chains In Movie Recommendation
Yong Zheng
School of Applied Technology
Illinois Institute of Technology
Chicago, IL, 60616, USA
The 2017 IEEE/WIC/ACM Conference on Web Intelligence (WI)
August 23-26, 2017, Leipzig, Germany
2. Agenda
• Background and Introduction
– Context-aware Recommender Systems
– Emotions In Recommender Systems
• Research Problems
– Emotion Acquisition
– Affective Predictions
• Methodologies and Results
• Conclusions and Future Work
3. Agenda
• Background and Introduction
– Context-aware Recommender Systems
– Emotions In Recommender Systems
• Research Problems
– Emotion Acquisition
– Affective Predictions
• Methodologies and Results
• Conclusions and Future Work
5. Context-Aware Recommendation
A user's decisions may vary from context to context
• Examples:
➢ Travel destination: in winter vs in summer
➢ Movie watching: with children vs with partner
➢ Restaurant: quick lunch vs business dinner
➢ Music: for workout vs for study
6. Terminology in CARS
• Example of Multi-dimensional Context-aware Data set
➢Context Dimension: time, location, companion
➢Context Condition: Weekend/Weekday, Home/Cinema
➢Context Situation: {Weekend, Home, Kids}
User Item Rating Time Location Companion
U1 T1 3 Weekend Home Kids
U1 T2 5 Weekday Home Partner
U2 T2 2 Weekend Cinema Partner
U2 T3 3 Weekday Cinema Family
U1 T3 ? Weekend Cinema Kids
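The example data set above can be written down directly; a minimal plain-Python sketch (values copied from the table, variable names illustrative):

```python
# The example multi-dimensional data set, one tuple per rating:
# (user, item, rating, time, location, companion).
rows = [
    ("U1", "T1", 3, "Weekend", "Home",   "Kids"),
    ("U1", "T2", 5, "Weekday", "Home",   "Partner"),
    ("U2", "T2", 2, "Weekend", "Cinema", "Partner"),
    ("U2", "T3", 3, "Weekday", "Cinema", "Family"),
]
dims = ("Time", "Location", "Companion")  # context dimensions

# Context conditions: the distinct values each dimension can take.
conditions = {d: sorted({row[3 + i] for row in rows})
              for i, d in enumerate(dims)}

# Context situation: one concrete combination of conditions, e.g. the
# situation attached to the first rating.
situation = rows[0][3:]  # ('Weekend', 'Home', 'Kids')
```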
7. What is Context?
The most common contextual variables:
➢Time and Location
➢User intent or purpose
➢User emotional states
➢Devices
➢Topics of interest, e.g., apple vs. Apple
➢Others: companion, weather, budget, etc.
Usually, the selection/definition of contexts is a domain-specific problem
9. Incorporate Emotional Effects into RecSys
• Marko Tkalcic, Andrej Kosir, and Jurij Tasic. 2011. Affective recommender systems: the role of emotions in recommender systems. In Proc. of the RecSys 2011 Workshop on Human Decision Making in Recommender Systems. ACM, 9–13.
• Ante Odic, Marko Tkalcic, Jurij F. Tasic, and Andrej Košir. 2012. Relevant context in a movie recommender system: Users' opinion vs. statistical detection. ACM RecSys '12 (2012).
• Yue Shi, Martha Larson, and Alan Hanjalic. 2013. Mining contextual movie similarity with matrix factorization for context-aware recommendation. ACM Transactions on Intelligent Systems and Technology (TIST) 4, 1 (2013), 16.
• Yong Zheng, Bamshad Mobasher, and Robin Burke. 2016. Emotions in context-aware recommender systems. In Emotions and Personality in Personalized Services. Springer, 311–326.
• Yong Zheng. 2016. Adapt to Emotional Reactions In Context-aware Personalization. In 4th Workshop on Emotions and Personality in Personalized Systems (EMPIRE) 2016, co-located with ACM RecSys 2016.
10. Agenda
• Background and Introduction
– Context-aware Recommender Systems
– Emotions In Recommender Systems
• Research Problems
– Emotion Acquisition
– Affective Predictions
• Methodologies and Results
• Conclusions and Future Work
11. Emotion Acquisition
We can collect emotions
➢By user surveys
➢By special user interactions, such as emoji
➢By emotion recognition or extraction, e.g., from texts, voice, facial expressions, etc.
➢By affective prediction: a learning process that predicts emotional states from the limited knowledge at hand
13. Challenges in Affective Prediction
There are correlations between the emotions in the two stages. For example, a user may feel sad before watching a movie; he may then be dissatisfied with the movie and leave a negative reaction after watching it.
14. Research Problems
We focus on the following problems:
➢How to better predict affective states
➢How to take emotion correlations into account
15. Agenda
• Background and Introduction
– Context-aware Recommender Systems
– Emotions In Recommender Systems
• Research Problems
– Emotion Acquisition
– Affective Predictions
• Methodologies and Results
• Conclusions and Future Work
16. LDOS-CoMoDa Movie Data Set
The data set contains 2,291 ratings given by 121 users on 1,232 movies, with 12 contextual dimensions.
17. 1. Independent Emotion Classification (IEC)
The problem is viewed as a classification problem
➢Features: user info and item features
➢Label(s): emotional variables
We use a binary classification algorithm to predict the binary value for each emotional variable independently.
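A minimal sketch of the IEC setup, with a toy majority-vote learner standing in for any real binary classifier (all data and names here are illustrative, not taken from the paper):

```python
# IEC: one independent binary classifier per emotional variable.
def train_majority(labels):
    """Return a 'classifier' that always predicts the majority label."""
    majority = max(set(labels), key=labels.count)
    return lambda features: majority

# Toy training data: one feature row per (user, item) pair and one
# binary column per emotional variable (names are illustrative).
X = [[0, 1], [1, 0], [1, 1], [0, 0]]
Y = {
    "happy": [1, 0, 1, 1],
    "sad":   [0, 1, 0, 0],
}

# IEC trains and predicts each emotion independently of the others.
models = {emotion: train_majority(ys) for emotion, ys in Y.items()}
prediction = {emotion: model([1, 1]) for emotion, model in models.items()}
# e.g. {'happy': 1, 'sad': 0}
```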
18. 2. Dependent Emotion Classification (DEC)
For example, classifier chains, where the prediction for each emotional variable is appended as an extra feature when predicting the next one
➢Features: user info and item features
➢Label(s): emotional variables
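The chaining idea behind DEC can be sketched as follows; `train_one` is a toy majority-vote stand-in for any real binary learner, and the feature and label values are made up for illustration:

```python
# DEC sketch: each emotion's prediction is fed forward as a feature
# for the next emotion in the chain, exposing emotion correlations.
def train_one(X, y):
    majority = max(set(y), key=y.count)
    return lambda features: majority

def train_chain(X, Y, order):
    """Train one model per emotion, extending features along the chain."""
    models, X_ext = [], [list(row) for row in X]
    for emotion in order:
        models.append((emotion, train_one(X_ext, Y[emotion])))
        # Append this emotion's (training) labels as extra features.
        for row, label in zip(X_ext, Y[emotion]):
            row.append(label)
    return models

def predict_chain(models, features):
    features, out = list(features), {}
    for emotion, model in models:
        out[emotion] = model(features)
        features.append(out[emotion])  # feed the prediction forward
    return out

X = [[0, 1], [1, 0], [1, 1]]
Y = {"happy": [1, 1, 0], "sad": [0, 0, 1]}
models = train_chain(X, Y, order=["happy", "sad"])
print(predict_chain(models, [0, 1]))  # {'happy': 1, 'sad': 0}
```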
19. 3. Independent Collaborative Prediction (ICP)
We choose collaborative filtering as the predictive model, since it may work better for personalization than classification.
We select one-class matrix factorization with side
information as the model in our experiments.
• Yi Fang and Luo Si. 2011. Matrix co-factorization for recommendation with rich side information and implicit feedback. In Proceedings of the 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems. ACM, 65–69.
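The collaborative core of ICP can be sketched with a bare-bones matrix factorization trained by SGD in plain Python. The actual model (Fang & Si, 2011) is one-class MF with side information; both refinements are omitted here, and all data, sizes, and hyperparameters are illustrative:

```python
# Toy matrix factorization: latent user vectors P and item vectors Q
# are fit so that P[u] . Q[i] approximates the observed value.
import random

def train_mf(obs, n_users, n_items, k=4, lr=0.05, reg=0.01, epochs=200):
    random.seed(0)  # deterministic toy run
    P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in obs:
            pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            err = r - pred
            for f in range(k):  # regularized SGD step on both factors
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# (user, item, emotion observed?) tuples -- toy data.
obs = [(0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 2, 1)]
P, Q = train_mf(obs, n_users=2, n_items=3)
score = sum(p * q for p, q in zip(P[0], Q[0]))  # fitted value for (0, 0)
```

In the full model, side-information features would additionally enter the prediction, e.g. through extra feature factors co-factorized with P and Q.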
20. 4. Dependent Collaborative Chains (DCC)
We again select one-class matrix factorization with side information as the model in our experiments, but the emotions predicted earlier in the chain are appended to the side information used for the later ones, so that correlations among emotions are exploited.
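The chain logic of DCC, independent of the underlying model, might look like this; `cf_predict` is a hypothetical stand-in for the one-class MF model, with a hard-coded toy rule in place of learned parameters:

```python
# DCC sketch: emotions are predicted one at a time, and each prediction
# is appended to the side information used for the next emotion.
def cf_predict(user, item, side_info):
    # Toy rule for illustration only: "sad" flips with predicted "happy".
    if "happy" in side_info:
        return 1 - side_info["happy"]
    return 1

def dcc_predict(user, item, chain):
    side_info = {}  # grows as the chain advances
    for emotion in chain:
        side_info[emotion] = cf_predict(user, item, side_info)
    return side_info

print(dcc_predict("U1", "T3", chain=["happy", "sad"]))
# {'happy': 1, 'sad': 0}
```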
21. Experimental Settings
➢We use the LDOS-CoMoDa movie rating data
➢5-fold cross validation is applied
➢We predict the emotions for the test set first, and examine the accuracy of these predictions
➢The predicted emotions are then incorporated into a context-aware recommendation model to examine the quality of the context-aware recommendations
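The fold logic of this protocol can be sketched as follows (the modulo split rule and toy data are illustrative; the paper does not state how folds are drawn):

```python
# 5-fold cross validation: each record is held out exactly once, and
# emotions are predicted for each held-out fold in turn.
def five_fold(data, k=5):
    for fold in range(k):
        test = [x for i, x in enumerate(data) if i % k == fold]
        train = [x for i, x in enumerate(data) if i % k != fold]
        yield train, test

data = list(range(10))  # stand-in for the rating records
sizes = [(len(train), len(test)) for train, test in five_fold(data)]
# each fold holds out 2 of the 10 examples: sizes == [(8, 2)] * 5
```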
23. Quality of the Context-aware Recommendations
• Yong Zheng. 2016. Adapt to Emotional Reactions In Context-aware Personalization. In 4th Workshop on Emotions and Personality in Personalized Systems (EMPIRE) 2016, co-located with ACM RecSys 2016 [the recommendation model used in this paper]
• Actual: the performance when we use the actual emotions
• Predicted: the performance when we use the predicted emotions
24. Agenda
• Background and Introduction
– Context-aware Recommender Systems
– Emotions In Recommender Systems
• Research Problems
– Emotion Acquisition
– Affective Predictions
• Methodologies and Results
• Conclusions and Future Work
25. Conclusions
➢We explore affective predictions
➢We predict the emotions by classification and by collaborative filtering, respectively
➢For each solution, we devise a way to incorporate the correlations among emotions
➢Collaborative predictions can help improve the quality of personalization
➢The dependent collaborative chains (DCC) approach proves to be the best predictive model
➢The predicted emotional states can also help produce good context-aware recommendations
26. Future Work
➢We plan to evaluate the proposed models in domains other than movies
➢The problem of affective prediction is closely related to a novel research topic, context suggestion, in which we predict or recommend appropriate contexts to the end users
➢In future work, we will try to use context suggestion as a solution to help predict the emotional states