COMP9417 - Machine Learning Homework 2: Numerical Implementation of Logistic Regression
Introduction
In homework 1, we considered Gradient Descent (and coordinate descent) for minimizing a regularized loss function. In this homework, we consider an alternative method known as Newton's algorithm. We will first run Newton's algorithm on a simple toy problem, and then implement it from scratch on a real data classification problem. We also look at the dual version of logistic regression.
Points Allocation There are a total of 30 marks.
• Question 1 a): 1 mark
• Question 1 b): 2 marks
• Question 2 a): 3 marks
• Question 2 b): 3 marks
• Question 2 c): 2 marks
• Question 2 d): 4 marks
• Question 2 e): 4 marks
• Question 2 f): 2 marks
• Question 2 g): 4 marks
• Question 2 h): 3 marks
• Question 2 i): 2 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
• Please complete your homework on your own; do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
When and Where to Submit
• Due date: Week 7, Monday March 25th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be done through Moodle, no exceptions.

Question 1. Introduction to Newton’s Method
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to in the question. Using existing implementations can result in a grade of zero for the entire question. In homework 1 we studied gradient descent (GD), which is usually referred to as a first order method. Here, we study an alternative algorithm known as Newton’s algorithm, which is generally referred to as a second order method. Roughly speaking, a second order method makes use of both first and second derivatives. Generally, second order methods are much more accurate than first order ones. Given a twice differentiable function g : R → R, Newton’s method generates a sequence {x(k)} iteratively according to the following update rule:
x(k+1) = x(k) − g′(x(k)) / g′′(x(k)),   k = 0, 1, 2, . . . ,   (1)
For example, consider the function g(x) = (1/2)x² − sin(x) with initial guess x(0) = 0. Then g′(x) = x − cos(x), and g′′(x) = 1 + sin(x),
and so we have the following iterations:
x(1) = x(0) − (x(0) − cos(x(0))) / (1 + sin(x(0))) = 0 − (0 − cos(0)) / (1 + sin(0)) = 1
x(2) = x(1) − (x(1) − cos(x(1))) / (1 + sin(x(1))) = 1 − (1 − cos(1)) / (1 + sin(1)) = 0.750363867840244
x(3) = 0.739112890911362
and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up, plot the function and each of the iterates). We note here that in practice, we often use a different update called the dampened Newton method, defined by:
x(k+1) = x(k) − α g′(x(k)) / g′′(x(k)),   k = 0, 1, 2, . . . .   (2)
Here, as in the case of GD, the step size α has the effect of 'dampening' the update. Consider now a twice differentiable function f : Rn → R. The Newton steps in this case are:
x(k+1) = x(k) − (H(x(k)))−1 ∇f(x(k)),   k = 0, 1, 2, . . . ,   (3)
where H(x) = ∇2f(x) is the Hessian of f. Heuristically, this formula generalizes equation (1) to functions with vector inputs, since the gradient is the analog of the first derivative and the Hessian is the analog of the second derivative.
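As a quick illustration (not part of the assessed questions), the scalar update (1) can be sketched in a few lines. The iterates below reproduce the worked example for g(x) = (1/2)x² − sin(x):

```python
import numpy as np

def newton_1d(g_prime, g_double_prime, x0, n_iters=10):
    """Run the (undampened) 1D Newton update x <- x - g'(x)/g''(x)."""
    xs = [x0]
    for _ in range(n_iters):
        x = xs[-1]
        xs.append(x - g_prime(x) / g_double_prime(x))
    return xs

# g(x) = 0.5*x**2 - sin(x), so g'(x) = x - cos(x) and g''(x) = 1 + sin(x)
xs = newton_1d(lambda x: x - np.cos(x),
               lambda x: 1.0 + np.sin(x),
               x0=0.0, n_iters=5)
print(xs[1:4])  # matches the iterates shown above
```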
(a) Consider the function f : R2 → R defined by f(x, y) = 100(y − x²)² + (1 − x)².
Create a 3D plot of the function using mplot3d (see lab0 for an example). Use a range of [−5, 5] for both x and y axes. Further, compute the gradient and Hessian of f. what to submit: A single plot, the code used to generate the plot, the gradient and Hessian calculated along with all working. Add a copy of the code to solutions.py
(b) Using NumPy only, implement the (undampened) Newton algorithm to find the minimizer of the function in the previous part, using an initial guess of x(0) = (−1.2, 1)T. Terminate the algorithm when ∥∇f(x(k))∥2 ≤ 10−6. Report the values of x(k) for k = 0, 1, . . . , K where K is your final iteration. what to submit: your iterations, and a screen shot of your code. Add a copy of the code to solutions.py
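A minimal sketch of one way such an implementation could look, with the gradient and Hessian hard-coded for the function in part (a); treat this as a starting point, not a model solution:

```python
import numpy as np

def grad(v):
    """Gradient of f(x, y) = 100(y - x^2)^2 + (1 - x)^2."""
    x, y = v
    return np.array([-400 * x * (y - x**2) - 2 * (1 - x),
                     200 * (y - x**2)])

def hess(v):
    """Hessian of the same function."""
    x, y = v
    return np.array([[1200 * x**2 - 400 * y + 2, -400 * x],
                     [-400 * x, 200.0]])

def newton(v0, tol=1e-6, max_iter=100):
    """Undampened Newton: solve H(x) d = grad(x) and step x <- x - d."""
    v = np.asarray(v0, dtype=float)
    iterates = [v.copy()]
    while np.linalg.norm(grad(v)) > tol and len(iterates) <= max_iter:
        v = v - np.linalg.solve(hess(v), grad(v))
        iterates.append(v.copy())
    return iterates

iterates = newton([-1.2, 1.0])
print(len(iterates) - 1, iterates[-1])  # converges to the minimizer (1, 1)
```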
Question 2. Solving Logistic Regression Numerically
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to do so in the question. Using existing implementations can result in a grade of zero for the entire question. In this question we will compare gradient descent and Newton's algorithm for solving the logistic regression problem. Recall that in logistic regression, our goal is to minimize the log-loss, also referred to as the cross-entropy loss. Consider an intercept β0 ∈ R, parameter vector β = (β1, . . . , βm)T ∈ Rm, target yi ∈ {0, 1} and input vector xi = (xi1, xi2, . . . , xip)T. Consider also the feature map φ : Rp → Rm and corresponding feature vector φi = (φi1, φi2, . . . , φim)T where φi = φ(xi). Define the (l2-regularized) log-loss function:

L(β0, β) = (1/2) ∥β∥2² − (λ/n) Σ_{i=1}^{n} [ yi ln σ(β0 + βT φi) + (1 − yi) ln(1 − σ(β0 + βT φi)) ],

where σ(z) = (1 + e−z)−1 is the logistic sigmoid, and λ is a hyper-parameter that controls the amount of regularization. Note that λ here is applied to the data-fit term as opposed to the penalty term directly, but all that changes is that larger λ now means more emphasis on data-fitting and less on regularization. Note also that you are provided with an implementation of this loss in helper.py.

(a) Show that the gradient descent update (with step size α) for γ = [β0, βT]T takes the form

γ(k) = γ(k−1) − α × [ −(λ/n) 1nT (y − σ(β0(k−1) 1n + Φβ(k−1))) ]
                    [ β(k−1) − (λ/n) ΦT (y − σ(β0(k−1) 1n + Φβ(k−1))) ]

where the sigmoid σ(·) is applied elementwise, 1n is the n-dimensional vector of ones, Φ = (φ1, φ2, . . . , φn)T ∈ Rn×m is the matrix whose i-th row is φiT, and y = (y1, y2, . . . , yn)T ∈ Rn. what to submit: your working out.
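As a sanity check of the update above, a sketch applying it to small synthetic data (the data, λ, and α values here are arbitrary illustrations, not the assignment settings applied to songs.csv):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def primal_loss(beta0, beta, Phi, y, lam):
    """L(beta0, beta) = 0.5||beta||^2 - (lam/n) * sum of log-likelihood terms."""
    n = len(y)
    p = sigmoid(beta0 + Phi @ beta)
    return 0.5 * beta @ beta - (lam / n) * np.sum(
        y * np.log(p) + (1 - y) * np.log(1 - p))

def gd_step(beta0, beta, Phi, y, lam, alpha):
    """One gradient descent step on gamma = [beta0, beta]."""
    n = len(y)
    r = y - sigmoid(beta0 + Phi @ beta)   # residual y - sigma(beta0*1n + Phi beta)
    beta0_new = beta0 - alpha * (-(lam / n) * np.sum(r))
    beta_new = beta - alpha * (beta - (lam / n) * Phi.T @ r)
    return beta0_new, beta_new

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 3))
y = (rng.random(50) < sigmoid(Phi @ np.array([1.0, -2.0, 0.5]))).astype(float)

beta0, beta = 0.0, np.zeros(3)
losses = [primal_loss(beta0, beta, Phi, y, lam=0.5)]
for _ in range(50):
    beta0, beta = gd_step(beta0, beta, Phi, y, lam=0.5, alpha=0.4)
    losses.append(primal_loss(beta0, beta, Phi, y, lam=0.5))
print(losses[0], losses[-1])  # the training loss should decrease
```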
(b) In what follows, we refer to the version of the problem based on L(β0, β) as the Primal version. Consider the re-parameterization β = Σ_{j=1}^{n} θj φ(xj). Show that the loss can now be written as:

L(θ0, θ) = (1/2) θT A θ − (λ/n) Σ_{i=1}^{n} [ yi ln σ(θ0 + θT bxi) + (1 − yi) ln(1 − σ(θ0 + θT bxi)) ],

where θ0 ∈ R, θ = (θ1, . . . , θn)T ∈ Rn, A ∈ Rn×n and, for i = 1, . . . , n, bxi ∈ Rn. We refer to this version of the problem as the Dual version. Write down exact expressions for A and bxi in terms of k(xi, xj) := ⟨φ(xi), φ(xj)⟩ for i, j = 1, . . . , n. Further, for the dual parameter η = [θ0, θT]T, show that the gradient descent update is given by:

η(k) = η(k−1) − α × [ −(λ/n) 1nT (y − σ(θ0(k−1) 1n + Aθ(k−1))) ]
                    [ Aθ(k−1) − (λ/n) A (y − σ(θ0(k−1) 1n + Aθ(k−1))) ]

If m ≫ n, what is the advantage of the dual representation relative to the primal one which just makes use of the feature maps φ directly? what to submit: your working along with some commentary.
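The dual objects above depend on the data only through kernel evaluations. A sketch of building the n × n matrix A of pairwise kernel values (the polynomial kernel here is a hypothetical choice, purely for illustration), together with quick checks that the result is a valid Gram matrix:

```python
import numpy as np

def gram_matrix(X, k):
    """A[i, j] = k(x_i, x_j); an n x n matrix whose size is independent of
    the feature dimension m, which is the point of the dual representation."""
    n = X.shape[0]
    return np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])

# Hypothetical kernel choice for illustration only.
k = lambda u, v: (1.0 + u @ v) ** 3

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))
A = gram_matrix(X, k)

assert np.allclose(A, A.T)                    # Gram matrices are symmetric
assert np.min(np.linalg.eigvalsh(A)) > -1e-8  # and positive semi-definite
```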
(c) We will now compare the performance of (primal/dual) GD and the Newton algorithm on a real dataset using the updates derived in the previous parts. To do this, we will work with the songs.csv dataset. The data contains information about various songs, and also contains a class variable outlining the genre of the song. If you are interested, you can read more about the data here, though a deep understanding of each of the features will not be crucial for the purposes of this assessment. Load in the data and perform the following preprocessing:
(I) Remove the following features: "Artist Name", "Track Name", "key", "mode", "time signature", "instrumentalness"
(II) The current dataset has 10 classes, but logistic regression in the form we have described it here only works for binary classification. We will restrict the data to classes 5 (hiphop) and 9 (pop). After removing the other classes, re-code the variables so that the target variable is y = 1 for hiphop and y = 0 for pop.
(III) Remove any remaining rows that have missing values for any of the features. Your remaining dataset should have a total of 3886 rows.
(IV) Use the sklearn.model_selection.train_test_split function to split your data into X_train, X_test, y_train and y_test. Use a test size of 0.3 and a random state of 23 for reproducibility.
(V) Fit the sklearn.preprocessing.MinMaxScaler to the resulting training data, and then use this object to scale both your train and test datasets so that the range of the data is in (0, 0.1).
(VI) Print out the first and last rows of X_train, X_test, y_train, y_test (but only the first 3 columns of X_train and X_test).
What to submit: the print out of the rows requested in (VI). A copy of your code in solutions.py
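Steps (II)–(V) can be sketched as follows. Since songs.csv is not reproduced here, a small synthetic frame with hypothetical column names stands in for the real data; only the pipeline shape is meant to carry over:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(23)
# Hypothetical stand-in for songs.csv: two features plus a 10-valued Class column.
df = pd.DataFrame({
    "danceability": rng.random(200),
    "energy": rng.random(200),
    "Class": rng.integers(0, 10, size=200),
})

df = df[df["Class"].isin([5, 9])].copy()   # keep hiphop (5) and pop (9) only
df["y"] = (df["Class"] == 5).astype(int)   # y = 1 for hiphop, y = 0 for pop
df = df.dropna()                           # drop rows with missing values

X = df[["danceability", "energy"]].to_numpy()
y = df["y"].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=23)

# Fit the scaler on the training data only, then transform both splits.
scaler = MinMaxScaler(feature_range=(0, 0.1)).fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```

Note that fitting the scaler on the training split only means the test data may fall slightly outside (0, 0.1); this is expected and avoids test-set leakage.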
(d) For the primal problem, we will use the feature map that generates all polynomial features up to and including order 3, that is:
φ(x) = [1, x1, . . . , xp, x1³, . . . , xp³, x1x2x3, . . . , xp−2 xp−1 xp].
In python, we can generate such features using sklearn.preprocessing.PolynomialFeatures.
For example, consider the following code snippet:
Footnote: if you need a sanity check here, the best thing to do is use sklearn to fit logistic regression models. This should give you an idea of what kind of loss your implementation should be achieving (if your implementation does as well or better, then you are on the right track).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(3)
X = np.arange(6).reshape(3, 2)
poly.fit_transform(X)
Transform the data appropriately, then run gradient descent with α = 0.4 on the training dataset for 50 epochs and λ = 0.5. In your implementation, initialize β0(0) = 0 and β(0) = 0p, where 0p is the p-dimensional vector of zeroes. Report your final train and test losses, as well as a plot of the training loss at each iteration. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
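The sanity check suggested in the footnote above can be sketched as follows, on synthetic data (the data-generating process here is an arbitrary illustration): fit sklearn's logistic regression on the polynomial features and record its training log-loss as a rough baseline for your own implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Labels driven by a cubic-representable signal plus noise (illustrative only).
y = (X[:, 0] - X[:, 1] ** 2 + 0.3 * rng.normal(size=200) > 0).astype(int)

Phi = PolynomialFeatures(3).fit_transform(X)   # all polynomial features up to order 3
clf = LogisticRegression(max_iter=1000).fit(Phi, y)
baseline = log_loss(y, clf.predict_proba(Phi))
print(f"sklearn training log-loss: {baseline:.4f}")
```

If your implementation reaches a comparable or lower loss on the same features, you are on the right track.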
 
(e) For the primal problem, run the dampened Newton algorithm on the training dataset for 50 epochs and λ = 0.5. Use the same initialization for β0, β as in the previous question. Report your final train and test losses, as well as plots of your train loss for both the GD and Newton algorithms for all iterations (use labels/legends to make your plot easy to read). In your implementation, you may use that the Hessian for the primal problem is given by:

H(β0, β) = [ (λ/n) 1nT D 1n    (λ/n) 1nT D Φ       ]
           [ (λ/n) ΦT D 1n     Im + (λ/n) ΦT D Φ   ]

where D is the n × n diagonal matrix with i-th diagonal element σ(di)(1 − σ(di)) and di = β0 + φiT β. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

(f) For the feature map used in the previous two questions, what is the corresponding kernel k(x, y) that can be used to give the corresponding dual problem? what to submit: the chosen kernel.
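The dampened Newton step from part (e) can be sketched on synthetic data as below (again, the data and hyper-parameter values are illustrations). Note the identity block Im in the Hessian, which comes from the (1/2)∥β∥² penalty term:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(gamma, Phi, y, lam):
    n = len(y)
    beta0, beta = gamma[0], gamma[1:]
    p = sigmoid(beta0 + Phi @ beta)
    return 0.5 * beta @ beta - (lam / n) * np.sum(
        y * np.log(p) + (1 - y) * np.log(1 - p))

def newton_step(gamma, Phi, y, lam, alpha):
    """One dampened Newton step: gamma <- gamma - alpha * H^{-1} grad."""
    n, m = Phi.shape
    beta0, beta = gamma[0], gamma[1:]
    s = sigmoid(beta0 + Phi @ beta)
    d = s * (1 - s)                        # diagonal of D
    g = np.concatenate(([-(lam / n) * np.sum(y - s)],
                        beta - (lam / n) * Phi.T @ (y - s)))
    H = np.zeros((m + 1, m + 1))
    H[0, 0] = (lam / n) * d.sum()
    H[0, 1:] = (lam / n) * d @ Phi
    H[1:, 0] = H[0, 1:]
    H[1:, 1:] = np.eye(m) + (lam / n) * Phi.T @ (d[:, None] * Phi)
    return gamma - alpha * np.linalg.solve(H, g)

rng = np.random.default_rng(2)
Phi = rng.normal(size=(60, 4))
y = (rng.random(60) < sigmoid(Phi @ rng.normal(size=4))).astype(float)

gamma = np.zeros(5)                        # [beta0, beta] initialized at zero
losses = [loss(gamma, Phi, y, lam=0.5)]
for _ in range(20):
    gamma = newton_step(gamma, Phi, y, lam=0.5, alpha=0.5)
    losses.append(loss(gamma, Phi, y, lam=0.5))
print(losses[0], losses[-1])  # the training loss should decrease
```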
(g) Implement gradient descent for the dual problem using the kernel found in the previous part. Use the same parameter values as before (although now θ0(0) = 0 and θ(0) = 0n, where 0n is the n-dimensional vector of zeroes). Report your final training loss, as well as a plot of your train loss for GD for all iterations. what to submit: a plot of the train losses and report your final train loss, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
(h) Explain how to compute the test loss for the GD solution to the dual problem in the previous part. Implement this approach and report the test loss. what to submit: some commentary and a screen shot of your code, and a copy of your code in solutions.py.
(i) In general, Newton's method is much better than GD: convergence of the Newton algorithm is quadratic, whereas convergence of GD is linear (much slower than quadratic). Given this, why do you think gradient descent and its variants (e.g. SGD) are much more popular for solving machine learning problems? what to submit: some commentary
日韩一区二区精品| 欧亚洲嫩模精品一区三区| 成人深夜视频在线观看| 麻豆国产91在线播放| 亚洲一区二区视频| 成人免费在线播放视频| 欧美精品一区二区久久久| 欧美日韩高清一区| 91电影在线观看| 成人免费高清视频在线观看| 蜜桃av一区二区| 亚洲图片欧美一区| 综合亚洲深深色噜噜狠狠网站| 精品国产乱码久久| 5566中文字幕一区二区电影 | 午夜精品久久一牛影视| 亚洲图片你懂的| 国产日韩在线不卡| 2023国产精品| 精品成人私密视频| 精品区一区二区| 日韩一区二区三区观看| 欧美电影影音先锋| 欧美男生操女生| 欧美人与禽zozo性伦| 欧美三电影在线| 欧美专区日韩专区| 欧美午夜寂寞影院| 欧美自拍偷拍一区| 欧美色爱综合网| 欧美日本一区二区在线观看| 欧美日韩免费电影|