COMP9727: Recommender Systems

Assignment: Content-Based Movie Recommendation

Due Date: Week 4, Friday, June 21, 5:00 p.m.

Value: 30%

This assignment is inspired by a typical application of recommender systems. The task is to

build a content-based “movie recommender” such as might be used by a streaming service (such

as Netflix) or review site (such as IMDb) to give users a personalized list of movies that match

their interests. The main learning objective for the assignment is to give a concrete example of

the issues that must be faced when building and evaluating a recommender system in a realistic

context. Note that, while movie recommender systems commonly make use of user ratings, our

scenario is not unrealistic as often all that a movie recommender system has are basic summaries

of the movies and the watch histories of the users.

For this assignment, you will be given a collection of 2000 movies that have been labelled as one

of 8 main genres (topics): animation, comedy, drama, family, horror, romance, sci-fi and thriller.

The movies of each genre are in a separate .tsv file named for the genre (such as animation.tsv)

with 7 fields: title, year, genre, director, cast, summary and country.
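
For illustration only, a minimal sketch of loading these files and concatenating the fields of each
movie into a single "document" (as required in Part 1) might look like the following. It assumes
pandas is available, that the eight files sit in the working directory under the genre names above,
and that they have no header row; adjust to the actual data as needed.

    # Sketch: load the eight genre files and build one text document per movie.
    # Assumes headerless .tsv files named after the genres; adjust if the real files differ.
    import pandas as pd

    GENRES = ["animation", "comedy", "drama", "family", "horror", "romance", "sci-fi", "thriller"]
    FIELDS = ["title", "year", "genre", "director", "cast", "summary", "country"]

    frames = [pd.read_csv(f"{g}.tsv", sep="\t", names=FIELDS, dtype=str) for g in GENRES]
    movies = pd.concat(frames, ignore_index=True)

    # Concatenate all fields of each movie into a single document; keep the genre as the label.
    documents = movies[FIELDS].fillna("").agg(" ".join, axis=1).tolist()
    labels = movies["genre"].tolist()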

The assignment is in three parts, corresponding to the components of a content-based recommender

system. The focus throughout is on explanation of choices and evaluation of the various methods

and models, which involves choosing and justifying appropriate metrics. The whole assignment

will be prepared (and submitted) as a Jupyter notebook, similar to those being used in tutorials,

that contains a mixture of running code and tutorial-style explanation.

Part 1 of the assignment is to examine various supervised machine learning methods using a variety

of features and settings to determine what methods work best for topic (genre) classification in

this domain/dataset. For this purpose, simply concatenate all the information for one movie into

a single “document”. You will use Bernoulli Naive Bayes from the tutorial, Multinomial Naive

Bayes from the lecture, and one other machine learning method of your choice from scikit-learn

or another machine learning library, and NLTK for auxiliary functions if needed.

Part 2 of the assignment is to test a potential recommender system that uses the method for

topic classification chosen in Part 1 by “simulating” a recommender system with a variety of

hypothetical users. This involves evaluating a number of techniques for “matching” user profiles

with movies using the similarity measures mentioned in the lecture. As we do not have real users,

for this part of the assignment, we will simply “invent” some (hopefully typical) users and evaluate

how well the recommender system would work for them, using appropriate metrics. Again you

will need to justify the choice of these metrics and explain how you arrived at your conclusions.

Part 3 of the assignment is to run a very small “user study” which means here finding one person,

preferably not someone in the class, to try out your recommendation method and give some

informal comments on the performance of your system from the user point of view. This does

not require any user interface to be built; the user can simply be shown the output of (or use) the

Jupyter notebook from Parts 1 and 2. However, you will have to decide how many movies to show

the user at any one time, and how to get feedback from them on which movies they would click on

and which movies match their interests. A simple “talk aloud” protocol is a good idea here (this

is where you ask the user to use your system and say out loud what they are thinking/doing at

the same time – however please do not record the user’s voice – for that we need ethics approval).

Note that standard UNSW late penalties apply.

Assignment

Below are a series of questions to guide you through this assignment. Your answer to each question

should be in a separate clearly labelled section of the Jupyter notebook you submit. Each answer

should contain a mixture of explanation and code. Use comments in the code to explain any code

that you think readers will find unclear. The “readers” here are students similar to yourselves

who know something about machine learning and text classification but who may not be familiar

with the details of the methods.

Part 1. Topic (Genre) Classification

1. (2 marks) There are a few simplifications in the Jupyter notebook in the tutorial: (i) the regex

might remove too many special characters, and (ii) the evaluation is based on only one training-

test split rather than using cross-validation. Explain how you are going to fix these mistakes and

then highlight any changes to the code in the answers to the next questions.
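
As a sketch of one possible fix (not the required one), issue (i) could be addressed with a gentler
regex that keeps digits, apostrophes and hyphens, and issue (ii) with stratified cross-validation
from scikit-learn. The snippet below assumes the documents and labels lists from the loading sketch
above.

    # Sketch: a gentler regex preprocessor (issue (i)) and stratified 5-fold
    # cross-validation instead of a single training-test split (issue (ii)).
    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    def preprocess(text):
        # Keep letters, digits, apostrophes and hyphens; one possible choice only.
        return re.sub(r"[^a-z0-9'\- ]+", " ", text.lower())

    bnb = make_pipeline(CountVectorizer(binary=True, preprocessor=preprocess), BernoulliNB())
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(bnb, documents, labels, cv=cv, scoring="accuracy")
    print(f"BNB 5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")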

2. (2 marks) Develop a Multinomial Naive Bayes (MNB) model similar to the Bernoulli Naive

Bayes (BNB) model. Now consider all the steps in text preprocessing used prior to classification

with both BNB and MNB. The aim here is to find preprocessing steps that maximize overall

accuracy (under the default settings of the classifiers and using CountVectorizer with the standard

settings). Consider the special characters to be removed (and how and when they are removed),

the definition of a “word”, the stopword list (from either NLTK or scikit-learn), lowercasing and

stemming/lemmatization. Summarize the preprocessing steps that you think work “best” overall

and do not change this for the rest of the assignment.
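
Purely as an illustration of the kind of pipeline involved (the preprocessing choices and their
justification are yours to make), an MNB pipeline with NLTK stopword removal and lemmatization
supplied through a custom tokenizer might be set up as follows.

    # Sketch: an MNB pipeline with one illustrative preprocessing configuration
    # (lowercasing, NLTK stopword removal, lemmatization). Not the required "best" settings.
    import re
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    nltk.download("stopwords", quiet=True)
    nltk.download("wordnet", quiet=True)

    STOPWORDS = set(stopwords.words("english"))
    lemmatizer = WordNetLemmatizer()

    def tokenize(text):
        # A simple definition of a "word": lowercase tokens of length >= 2, possibly with ' or -.
        tokens = re.findall(r"[a-z][a-z'\-]+", text.lower())
        return [lemmatizer.lemmatize(t) for t in tokens if t not in STOPWORDS]

    mnb = make_pipeline(CountVectorizer(tokenizer=tokenize, token_pattern=None), MultinomialNB())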

3. (2 marks) Compare BNB and MNB models by evaluating them using the full dataset with

cross-validation. Choose appropriate metrics from those in the lecture that focus on the overall

accuracy of classification (i.e. not top-N metrics). Briefly discuss the tradeoffs between the various

metrics and then justify your choice of the main metrics for evaluation, taking into account whether

this dataset is balanced or imbalanced. On this basis, conclude whether either of BNB or MNB is

superior. Justify this conclusion with plots/tables.
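
A sketch of how the comparison could be organized is given below, reusing the bnb and mnb pipelines
and the cv splitter from the sketches above. The scorers listed are candidates only; choosing and
justifying the main metrics is part of the answer.

    # Sketch: cross-validated comparison of BNB and MNB on several candidate metrics.
    # Whether macro- or micro-averaged metrics suit this (im)balanced dataset is for you to argue.
    from sklearn.model_selection import cross_validate

    scoring = ["accuracy", "f1_macro", "precision_macro", "recall_macro"]
    for name, model in [("BNB", bnb), ("MNB", mnb)]:
        results = cross_validate(model, documents, labels, cv=cv, scoring=scoring)
        print(name, {m: round(results[f"test_{m}"].mean(), 3) for m in scoring})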

4. (2 marks) Consider varying the number of features (words) used by BNB and MNB in the

classification, using the sklearn setting which limits the number to the top N most frequent

words in the Vectorizer. Compare classification results for various values for N and justify, based

on experimental results, one value for N that works well overall and use this value for the rest

of the assignment. Show plots or tables that support your decision. The emphasis is on clear

presentation of the results so do not print out large tables or too many tables that are difficult to

understand.
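
The sklearn setting referred to is the max_features parameter of CountVectorizer. A sketch of
sweeping it, reusing documents, labels and cv from earlier sketches, is shown below; the candidate
values of N are illustrative only.

    # Sketch: vary the number of most-frequent words kept by the vectorizer
    # (CountVectorizer(max_features=N)) and record a cross-validated score for each N.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    for n in [500, 1000, 2000, 5000, 10000, None]:  # None = no limit
        model = make_pipeline(CountVectorizer(max_features=n), MultinomialNB())
        scores = cross_val_score(model, documents, labels, cv=cv, scoring="f1_macro")
        print(f"max_features={n}: macro-F1 = {scores.mean():.3f}")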

5. (5 marks) Choose one other machine learning method, perhaps one mentioned in the lecture.

Summarize this method in a single tutorial-style paragraph and explain why you think it is suitable

for topic classification for this dataset (for example, maybe other people have used this method

for a similar problem). Use the implementation of this method from a standard machine learning

library such as sklearn (not other people's code from the Internet) to implement this method on

the movie dataset using the same text preprocessing as for BNB and MNB. If the method has any

hyperparameters for tuning, explain how you will select those settings (or use the default settings),

and present a concrete hypothesis for how this method will compare to BNB and MNB.

Conduct experiments (and show the code for these experiments) using cross-validation and

comment on whether you confirmed (or not) your hypothesis. Finally, compare this method to BNB

and MNB on the metrics you used in Step 3 and choose one overall “best” method and settings

for topic classification.
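
For example, if the third method happened to be logistic regression (one possibility among many),
it could be dropped into the same preprocessing and cross-validation harness as BNB and MNB, as in
the sketch below.

    # Sketch: a third classifier (logistic regression, chosen purely as an example)
    # evaluated with the same cross-validation setup as BNB and MNB.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    third = make_pipeline(CountVectorizer(max_features=5000),
                          LogisticRegression(max_iter=1000))
    scores = cross_val_score(third, documents, labels, cv=cv, scoring="f1_macro")
    print(f"Logistic regression macro-F1: {scores.mean():.3f}")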

Part 2. Recommendation Methods

1. (6 marks) The aim is to use the information retrieval algorithms for “matching” user profiles

to “documents” described in the lecture as a recommendation method. The overall idea is that

the classifier from Part 1 will assign a new movie to one of the 8 genres, and this movie will be

recommended to the user if the tf-idf vector for the movie is similar to the tf-idf vector for the

profile of the user in the predicted genre. The user profile for each genre will consist of the words,

or top M words, representing the interests of the user in that genre, computed as a tf-idf vector

across all movies predicted in that genre of interest to the user.

To get started, assume there is “training data” for the user profiles and “test data” for the

recommender defined as follows. There are 250 movies in each file. Suppose that the order in the

file is the time ordering of the movies, and suppose these movies came from a series of weeks, with

50 movies from each week. Assume Weeks 1–3 (movies 1–150) form the training data and Week 4

(movies 151–200) are the test data. Use TfidfVectorizer on all documents in the training data

to create a tf-idf matrix that defines a vector for each document (movie) in the training set.

Use these tf-idf values to define a user profile, which consists of a vector for each of the 8 genres.

To do this, for each genre, combine the movies from the training set predicted to be in that genre

that the user “likes” into one (larger) document, so there will be 8 documents, one for each genre,

and use the vectorizer defined above to define a tf-idf vector for each such document (genre).
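
A sketch of this profile construction is given below. The names train_docs (the 150 training
documents in time order), predicted_genre (the classifier's prediction for each) and liked (whether
the simulated user liked each movie) are hypothetical placeholders.

    # Sketch: build one tf-idf profile vector per genre from the movies the user liked,
    # grouped by the *predicted* genre. train_docs, predicted_genre and liked are assumed.
    from collections import defaultdict
    from sklearn.feature_extraction.text import TfidfVectorizer

    tfidf = TfidfVectorizer()
    tfidf.fit(train_docs)  # fit on all training documents

    liked_by_genre = defaultdict(list)
    for doc, genre, is_liked in zip(train_docs, predicted_genre, liked):
        if is_liked:
            liked_by_genre[genre].append(doc)

    # One combined "document" per genre, then one tf-idf vector per genre.
    profile = {genre: tfidf.transform([" ".join(docs)])
               for genre, docs in liked_by_genre.items()}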

Unfortunately we do not have any real users for our recommender system (because it has not yet

been built!), but we want some idea of how well it would perform. We invent two hypothetical

users, and simulate their use of the system. We specify the interests of each user with a set of

keywords for each genre. These user profiles can be found in the files user1.tsv and user2.tsv

where each line in the file is a genre followed by a tab and a list of keywords. All the words are

case insensitive. Important: Although we know the pairing of the genres and keywords, all the

recommender system “knows” is what movies the user liked in each genre.

Develop user profiles for User 1 and User 2 from the simulated training data (not the keywords

used to define their interests) by supposing they liked all the movies from Weeks 1–3 that matched

their interests and were predicted to be in the right category, i.e. assume the true genre is not

known, but instead the topic classifier is used to predict the movie genre, and the movie is shown

to the user listed under that genre. Print the top 20 words in their profiles for each of the genres.

Comment on whether these words seem reasonable.
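
Printing the top words of a profile could be done along the following lines, reusing the tfidf
vectorizer and profile dictionary from the previous sketch.

    # Sketch: the 20 highest-weighted words in each genre profile.
    import numpy as np

    vocab = np.array(tfidf.get_feature_names_out())
    for genre, vec in profile.items():
        weights = vec.toarray().ravel()
        top = vocab[np.argsort(weights)[::-1][:20]]
        print(genre, list(top))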

Define another hypothetical “user” (User 3) by choosing different keywords across a range of

genres (perhaps those that match your interests or those of someone you know), and print the

top 20 keywords in their profile for each of their topics of interest. Comment on whether these words seem

reasonable.

2. (6 marks) Suppose a user sees N recommended movies and “likes” some of them. Choose and

justify appropriate metrics to evaluate the performance of the recommendation method. Also

choose an appropriate value for N based on how you think the movies will be presented. Pay

attention to the large variety of movies and the need to obtain useful feedback from the user (i.e.

they must like some movies shown to them).

Evaluate the performance of the recommendation method by testing how well the top N movies

that the recommender suggests for Week 4, based on the user profiles, match the interests of each

user. That is, assume that each user likes all and only those movies in the top N recommendations

that matched their profile for the predicted (not true) genre (where N is your chosen value). State

clearly whether you are showing N movies in total or N movies per genre. As part of the analysis,

consider various values for M, the number of words in the user profile for each genre, compared to

using all words.

Show the metrics for some of the matching algorithms to see which performs better for Users 1,

2 and 3. Explain any differences between the users. On the basis of these results, choose one

algorithm for matching user profiles and movies and explain your decision.
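
One possible shape for the matching and top-N evaluation step is sketched below: Week 4 movies are
ranked by cosine similarity between their tf-idf vectors and the profile for their predicted genre,
and precision@N is computed as one candidate metric. The names test_docs, test_predicted_genre and
user_likes are hypothetical placeholders, and N = 10 is illustrative only.

    # Sketch: rank Week 4 movies by cosine similarity to the user's profile for the
    # predicted genre, then compute precision@N. test_docs, test_predicted_genre and
    # user_likes (a function saying whether the simulated user likes a movie) are assumed.
    from sklearn.metrics.pairwise import cosine_similarity

    N = 10  # illustrative choice only
    test_vectors = tfidf.transform(test_docs)

    scored = []
    for i, genre in enumerate(test_predicted_genre):
        if genre in profile:
            sim = cosine_similarity(test_vectors[i], profile[genre])[0, 0]
            scored.append((sim, i))

    top_n = sorted(scored, reverse=True)[:N]
    precision_at_n = sum(user_likes(test_docs[i]) for _, i in top_n) / N
    print(f"precision@{N} = {precision_at_n:.2f}")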

Part 3. User Evaluation

1. (5 marks) Conduct a “user study” of a hypothetical recommender system based on the method

chosen in Part 2. Your evaluation in Part 2 will have included a choice of the number N of movies

to show the user at any one time. For simplicity, suppose the user uses your system once per

week. Simulate running the recommender system for 3 weeks and training the model at the end

of Week 3 using interaction data obtained from the user, and testing the recommendations that

would be provided to that user in Week 4.

Choose one friendly “subject” and ask them to view (successively over a period of 4 simulated

weeks) N movies chosen at random for each “week”, for Weeks 1, 2 and 3, and then (after training

the model) the recommended movies from Week 4. The subject could be someone else from the

course, but preferably is someone without knowledge of recommendation algorithms who will give

useful and unbiased feedback.

To be more precise, the user is shown 3 randomly chosen batches of N movies, one batch from

Week 1 (N movies from 1–50), one batch from Week 2 (N movies from 51–100), and one batch

from Week 3 (N movies from 101–150), and says which of these they “like”. This gives training

data from which you can then train a recommendation model using the method in Part 2. The

user is then shown a batch of recommended movies from Week 4 (N movies from 151–200) in rank

order, and metrics are calculated based on which of these movies the user likes. Show all these

metrics in a suitable form (plots or tables).
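
A minimal sketch of collecting the subject's feedback inside the notebook is given below;
movies_by_week is a hypothetical mapping from week number to that week's (title, document) pairs,
and how you actually present the movies and record likes is up to you.

    # Sketch: show one random batch of N titles per simulated week and record which
    # ones the subject says they like. movies_by_week and N are assumed to be defined.
    import random

    def collect_likes(batch):
        liked_movies = []
        for title, doc in batch:
            answer = input(f"Would you watch '{title}'? (y/n) ").strip().lower()
            if answer.startswith("y"):
                liked_movies.append((title, doc))
        return liked_movies

    training_likes = []
    for week in (1, 2, 3):
        batch = random.sample(movies_by_week[week], N)
        training_likes.extend(collect_likes(batch))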

Ask the subject to talk aloud but make sure you find out which movies they are interested in.

Calculate and show the various metrics for the Week 4 recommended movies that you would show

using the model developed in Part 2. Explain any differences between metrics calculated in Part 2

and the metrics obtained from the real user. Finally, mention any general user feedback concerning

the quality of the recommendations.

Submission and Assessment

• Please include your name and zid at the start of the notebook.

• Submit your notebook files using the following command:

give cs9727 asst .ipynb

You can check that your submission has been received using the command:

9727 classrun -check asst

• Assessment criteria include the correctness and thoroughness of code and experimental

analysis, clarity and succinctness of explanations, and presentation quality.

Plagiarism

Remember that ALL work submitted for this assignment must be your own work and no sharing

or copying of code or answers is allowed. You may discuss the assignment with other students but

must not collaborate on developing answers to the questions. You may use code from the Internet

only with suitable attribution of the source. You may not use ChatGPT or any similar software to

generate any part of your explanations, evaluations or code. Do not use public code repositories

on sites such as github or file sharing sites such as Google Drive to save any part of your work –

make sure your code repository or cloud storage is private and do not share any links. This also

applies after you have finished the course, as we do not want next year’s students accessing your

solution, and plagiarism penalties can still apply after the course has finished.

All submitted assignments will be run through plagiarism detection software to detect similarities

to other submissions, including from past years. You should carefully read the UNSW policy on

academic integrity and plagiarism (linked from the course web page), noting, in particular, that

collusion (working together on an assignment, or sharing parts of assignment solutions) is a form

of plagiarism.

Finally, do not use any contract cheating “academies” or online “tutoring” services. This counts

as serious misconduct with heavy penalties up to automatic failure of the course with 0 marks,

and expulsion from the university for repeat offenders.
